00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4061 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3651 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.018 The recommended git tool is: git 00:00:00.018 using credential 00000000-0000-0000-0000-000000000002 00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.053 Using shallow fetch with depth 1 00:00:00.053 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.053 > git --version # timeout=10 00:00:00.078 > git --version # 'git version 2.39.2' 00:00:00.078 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.099 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.099 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.538 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.550 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.562 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.562 > git config core.sparsecheckout # timeout=10 00:00:02.576 > git read-tree -mu HEAD # timeout=10 00:00:02.592 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.615 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.615 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.819 [Pipeline] Start of Pipeline 00:00:02.832 [Pipeline] library 00:00:02.833 Loading library shm_lib@master 00:00:02.833 Library shm_lib@master is cached. Copying from home. 00:00:02.847 [Pipeline] node 00:00:02.861 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.863 [Pipeline] { 00:00:02.876 [Pipeline] catchError 00:00:02.878 [Pipeline] { 00:00:02.892 [Pipeline] wrap 00:00:02.900 [Pipeline] { 00:00:02.907 [Pipeline] stage 00:00:02.909 [Pipeline] { (Prologue) 00:00:02.925 [Pipeline] echo 00:00:02.927 Node: VM-host-WFP7 00:00:02.932 [Pipeline] cleanWs 00:00:02.942 [WS-CLEANUP] Deleting project workspace... 00:00:02.942 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.949 [WS-CLEANUP] done 00:00:03.138 [Pipeline] setCustomBuildProperty 00:00:03.229 [Pipeline] httpRequest 00:00:03.547 [Pipeline] echo 00:00:03.548 Sorcerer 10.211.164.20 is alive 00:00:03.553 [Pipeline] retry 00:00:03.554 [Pipeline] { 00:00:03.561 [Pipeline] httpRequest 00:00:03.565 HttpMethod: GET 00:00:03.565 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.566 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.567 Response Code: HTTP/1.1 200 OK 00:00:03.567 Success: Status code 200 is in the accepted range: 200,404 00:00:03.567 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.713 [Pipeline] } 00:00:03.726 [Pipeline] // retry 00:00:03.731 [Pipeline] sh 00:00:04.011 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.030 [Pipeline] httpRequest 00:00:04.388 [Pipeline] echo 00:00:04.389 Sorcerer 10.211.164.20 is 
alive 00:00:04.399 [Pipeline] retry 00:00:04.402 [Pipeline] { 00:00:04.416 [Pipeline] httpRequest 00:00:04.423 HttpMethod: GET 00:00:04.423 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:04.425 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:04.428 Response Code: HTTP/1.1 200 OK 00:00:04.429 Success: Status code 200 is in the accepted range: 200,404 00:00:04.429 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:13.504 [Pipeline] } 00:00:13.522 [Pipeline] // retry 00:00:13.530 [Pipeline] sh 00:00:13.810 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:16.363 [Pipeline] sh 00:00:16.648 + git -C spdk log --oneline -n5 00:00:16.648 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:00:16.648 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:00:16.648 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:00:16.648 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:00:16.648 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:00:16.669 [Pipeline] withCredentials 00:00:16.680 > git --version # timeout=10 00:00:16.693 > git --version # 'git version 2.39.2' 00:00:16.711 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:16.713 [Pipeline] { 00:00:16.724 [Pipeline] retry 00:00:16.727 [Pipeline] { 00:00:16.746 [Pipeline] sh 00:00:17.031 + git ls-remote http://dpdk.org/git/dpdk main 00:00:17.303 [Pipeline] } 00:00:17.325 [Pipeline] // retry 00:00:17.331 [Pipeline] } 00:00:17.351 [Pipeline] // withCredentials 00:00:17.363 [Pipeline] httpRequest 00:00:18.094 [Pipeline] echo 00:00:18.096 Sorcerer 10.211.164.20 is alive 00:00:18.106 [Pipeline] retry 00:00:18.108 [Pipeline] { 
00:00:18.123 [Pipeline] httpRequest 00:00:18.128 HttpMethod: GET 00:00:18.129 URL: http://10.211.164.20/packages/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz 00:00:18.129 Sending request to url: http://10.211.164.20/packages/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz 00:00:18.149 Response Code: HTTP/1.1 200 OK 00:00:18.149 Success: Status code 200 is in the accepted range: 200,404 00:00:18.150 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz 00:01:28.401 [Pipeline] } 00:01:28.420 [Pipeline] // retry 00:01:28.427 [Pipeline] sh 00:01:28.721 + tar --no-same-owner -xf dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz 00:01:30.117 [Pipeline] sh 00:01:30.397 + git -C dpdk log --oneline -n5 00:01:30.397 f4ccce58c1 doc: allow warnings in Sphinx for DTS 00:01:30.397 0c0cd5ffb0 version: 24.11-rc3 00:01:30.397 8c9a7471a0 dts: add checksum offload test suite 00:01:30.397 bee7cf823c dts: add checksum offload to testpmd shell 00:01:30.397 2eef9a80df dts: add dynamic queue test suite 00:01:30.414 [Pipeline] writeFile 00:01:30.429 [Pipeline] sh 00:01:30.710 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.720 [Pipeline] sh 00:01:31.003 + cat autorun-spdk.conf 00:01:31.003 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.003 SPDK_RUN_ASAN=1 00:01:31.003 SPDK_RUN_UBSAN=1 00:01:31.003 SPDK_TEST_RAID=1 00:01:31.003 SPDK_TEST_NATIVE_DPDK=main 00:01:31.003 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.003 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.010 RUN_NIGHTLY=1 00:01:31.012 [Pipeline] } 00:01:31.025 [Pipeline] // stage 00:01:31.039 [Pipeline] stage 00:01:31.042 [Pipeline] { (Run VM) 00:01:31.054 [Pipeline] sh 00:01:31.337 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:31.337 + echo 'Start stage prepare_nvme.sh' 00:01:31.337 Start stage prepare_nvme.sh 00:01:31.337 + [[ -n 2 ]] 00:01:31.337 + disk_prefix=ex2 00:01:31.337 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest ]] 00:01:31.337 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:31.337 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:31.337 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.337 ++ SPDK_RUN_ASAN=1 00:01:31.337 ++ SPDK_RUN_UBSAN=1 00:01:31.337 ++ SPDK_TEST_RAID=1 00:01:31.337 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:31.337 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.337 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.337 ++ RUN_NIGHTLY=1 00:01:31.337 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:31.337 + nvme_files=() 00:01:31.337 + declare -A nvme_files 00:01:31.337 + backend_dir=/var/lib/libvirt/images/backends 00:01:31.337 + nvme_files['nvme.img']=5G 00:01:31.337 + nvme_files['nvme-cmb.img']=5G 00:01:31.337 + nvme_files['nvme-multi0.img']=4G 00:01:31.337 + nvme_files['nvme-multi1.img']=4G 00:01:31.337 + nvme_files['nvme-multi2.img']=4G 00:01:31.337 + nvme_files['nvme-openstack.img']=8G 00:01:31.337 + nvme_files['nvme-zns.img']=5G 00:01:31.337 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:31.337 + (( SPDK_TEST_FTL == 1 )) 00:01:31.337 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:31.337 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:31.337 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:31.337 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:31.337 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:31.337 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:31.337 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:31.337 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.337 + for nvme in "${!nvme_files[@]}" 00:01:31.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:31.596 
Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.596 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:31.596 + echo 'End stage prepare_nvme.sh' 00:01:31.596 End stage prepare_nvme.sh 00:01:31.606 [Pipeline] sh 00:01:31.887 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.887 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:31.887 00:01:31.887 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:31.887 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:31.887 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:31.887 HELP=0 00:01:31.887 DRY_RUN=0 00:01:31.887 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:31.887 NVME_DISKS_TYPE=nvme,nvme, 00:01:31.887 NVME_AUTO_CREATE=0 00:01:31.887 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:31.887 NVME_CMB=,, 00:01:31.887 NVME_PMR=,, 00:01:31.887 NVME_ZNS=,, 00:01:31.887 NVME_MS=,, 00:01:31.887 NVME_FDP=,, 00:01:31.887 SPDK_VAGRANT_DISTRO=fedora39 00:01:31.887 SPDK_VAGRANT_VMCPU=10 00:01:31.887 SPDK_VAGRANT_VMRAM=12288 00:01:31.887 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.887 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:31.887 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.887 SPDK_OPENSTACK_NETWORK=0 00:01:31.887 VAGRANT_PACKAGE_BOX=0 00:01:31.887 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:31.887 
FORCE_DISTRO=true 00:01:31.887 VAGRANT_BOX_VERSION= 00:01:31.887 EXTRA_VAGRANTFILES= 00:01:31.887 NIC_MODEL=virtio 00:01:31.887 00:01:31.887 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:31.887 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:34.424 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.424 ==> default: Creating image (snapshot of base box volume). 00:01:34.683 ==> default: Creating domain with the following settings... 00:01:34.683 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732158621_fd60d0f76f4f05669b70 00:01:34.683 ==> default: -- Domain type: kvm 00:01:34.683 ==> default: -- Cpus: 10 00:01:34.683 ==> default: -- Feature: acpi 00:01:34.683 ==> default: -- Feature: apic 00:01:34.683 ==> default: -- Feature: pae 00:01:34.683 ==> default: -- Memory: 12288M 00:01:34.683 ==> default: -- Memory Backing: hugepages: 00:01:34.683 ==> default: -- Management MAC: 00:01:34.683 ==> default: -- Loader: 00:01:34.683 ==> default: -- Nvram: 00:01:34.683 ==> default: -- Base box: spdk/fedora39 00:01:34.683 ==> default: -- Storage pool: default 00:01:34.683 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732158621_fd60d0f76f4f05669b70.img (20G) 00:01:34.683 ==> default: -- Volume Cache: default 00:01:34.683 ==> default: -- Kernel: 00:01:34.683 ==> default: -- Initrd: 00:01:34.683 ==> default: -- Graphics Type: vnc 00:01:34.683 ==> default: -- Graphics Port: -1 00:01:34.683 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.683 ==> default: -- Graphics Password: Not defined 00:01:34.683 ==> default: -- Video Type: cirrus 00:01:34.683 ==> default: -- Video VRAM: 9216 00:01:34.683 ==> default: -- Sound Type: 00:01:34.683 ==> default: -- Keymap: en-us 00:01:34.683 ==> default: -- TPM Path: 00:01:34.683 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.683 ==> default: -- Command line args: 00:01:34.683 
==> default: -> value=-device, 00:01:34.683 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:34.683 ==> default: -> value=-drive, 00:01:34.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:34.683 ==> default: -> value=-device, 00:01:34.683 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.683 ==> default: -> value=-device, 00:01:34.683 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:34.683 ==> default: -> value=-drive, 00:01:34.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:34.683 ==> default: -> value=-device, 00:01:34.684 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.684 ==> default: -> value=-drive, 00:01:34.684 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:34.684 ==> default: -> value=-device, 00:01:34.684 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.684 ==> default: -> value=-drive, 00:01:34.684 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:34.684 ==> default: -> value=-device, 00:01:34.684 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.684 ==> default: Creating shared folders metadata... 00:01:34.684 ==> default: Starting domain. 00:01:36.589 ==> default: Waiting for domain to get an IP address... 00:01:51.519 ==> default: Waiting for SSH to become available... 00:01:52.473 ==> default: Configuring and enabling network interfaces... 
00:01:59.054 default: SSH address: 192.168.121.125:22 00:01:59.054 default: SSH username: vagrant 00:01:59.054 default: SSH auth method: private key 00:02:01.596 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:09.719 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:15.010 ==> default: Mounting SSHFS shared folder... 00:02:17.545 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:17.545 ==> default: Checking Mount.. 00:02:18.921 ==> default: Folder Successfully Mounted! 00:02:18.921 ==> default: Running provisioner: file... 00:02:19.857 default: ~/.gitconfig => .gitconfig 00:02:20.424 00:02:20.424 SUCCESS! 00:02:20.424 00:02:20.424 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:20.424 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:20.424 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:20.424 00:02:20.433 [Pipeline] } 00:02:20.448 [Pipeline] // stage 00:02:20.457 [Pipeline] dir 00:02:20.458 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:20.459 [Pipeline] { 00:02:20.471 [Pipeline] catchError 00:02:20.473 [Pipeline] { 00:02:20.485 [Pipeline] sh 00:02:20.766 + vagrant ssh-config --host vagrant 00:02:20.766 + sed -ne /^Host/,$p 00:02:20.766 + tee ssh_conf 00:02:24.064 Host vagrant 00:02:24.064 HostName 192.168.121.125 00:02:24.064 User vagrant 00:02:24.064 Port 22 00:02:24.064 UserKnownHostsFile /dev/null 00:02:24.064 StrictHostKeyChecking no 00:02:24.064 PasswordAuthentication no 00:02:24.064 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:24.064 IdentitiesOnly yes 00:02:24.064 LogLevel FATAL 00:02:24.064 ForwardAgent yes 00:02:24.064 ForwardX11 yes 00:02:24.064 00:02:24.076 [Pipeline] withEnv 00:02:24.079 [Pipeline] { 00:02:24.091 [Pipeline] sh 00:02:24.370 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:24.370 source /etc/os-release 00:02:24.370 [[ -e /image.version ]] && img=$(< /image.version) 00:02:24.370 # Minimal, systemd-like check. 00:02:24.370 if [[ -e /.dockerenv ]]; then 00:02:24.370 # Clear garbage from the node's name: 00:02:24.370 # agt-er_autotest_547-896 -> autotest_547-896 00:02:24.370 # $HOSTNAME is the actual container id 00:02:24.370 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:24.370 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:24.370 # We can assume this is a mount from a host where container is running, 00:02:24.370 # so fetch its hostname to easily identify the target swarm worker. 
00:02:24.370 container="$(< /etc/hostname) ($agent)" 00:02:24.370 else 00:02:24.370 # Fallback 00:02:24.370 container=$agent 00:02:24.370 fi 00:02:24.370 fi 00:02:24.370 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:24.370 00:02:24.642 [Pipeline] } 00:02:24.658 [Pipeline] // withEnv 00:02:24.666 [Pipeline] setCustomBuildProperty 00:02:24.681 [Pipeline] stage 00:02:24.683 [Pipeline] { (Tests) 00:02:24.700 [Pipeline] sh 00:02:24.984 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:25.258 [Pipeline] sh 00:02:25.543 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:25.818 [Pipeline] timeout 00:02:25.819 Timeout set to expire in 1 hr 30 min 00:02:25.821 [Pipeline] { 00:02:25.835 [Pipeline] sh 00:02:26.119 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:26.689 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:02:26.701 [Pipeline] sh 00:02:26.984 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.260 [Pipeline] sh 00:02:27.544 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:27.824 [Pipeline] sh 00:02:28.163 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:28.423 ++ readlink -f spdk_repo 00:02:28.423 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.423 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.423 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.423 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.423 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.423 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:28.423 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.423 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:28.423 + cd /home/vagrant/spdk_repo 00:02:28.423 + source /etc/os-release 00:02:28.423 ++ NAME='Fedora Linux' 00:02:28.423 ++ VERSION='39 (Cloud Edition)' 00:02:28.423 ++ ID=fedora 00:02:28.423 ++ VERSION_ID=39 00:02:28.423 ++ VERSION_CODENAME= 00:02:28.423 ++ PLATFORM_ID=platform:f39 00:02:28.423 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:28.423 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:28.423 ++ LOGO=fedora-logo-icon 00:02:28.423 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:28.423 ++ HOME_URL=https://fedoraproject.org/ 00:02:28.423 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:28.423 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:28.423 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:28.423 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:28.423 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:28.423 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:28.423 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:28.423 ++ SUPPORT_END=2024-11-12 00:02:28.423 ++ VARIANT='Cloud Edition' 00:02:28.423 ++ VARIANT_ID=cloud 00:02:28.423 + uname -a 00:02:28.423 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:28.423 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:28.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:28.991 Hugepages 00:02:28.991 node hugesize free / total 00:02:28.991 node0 1048576kB 0 / 0 00:02:28.991 node0 2048kB 0 / 0 00:02:28.991 00:02:28.991 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:28.991 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:28.991 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:28.991 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:28.991 + rm -f /tmp/spdk-ld-path 00:02:28.991 + source autorun-spdk.conf 00:02:28.991 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.991 ++ SPDK_RUN_ASAN=1 00:02:28.991 ++ SPDK_RUN_UBSAN=1 00:02:28.991 ++ SPDK_TEST_RAID=1 00:02:28.991 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:28.991 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:28.991 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.991 ++ RUN_NIGHTLY=1 00:02:28.991 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:28.991 + [[ -n '' ]] 00:02:28.991 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:28.991 + for M in /var/spdk/build-*-manifest.txt 00:02:28.991 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:28.992 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.992 + for M in /var/spdk/build-*-manifest.txt 00:02:28.992 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:28.992 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.992 + for M in /var/spdk/build-*-manifest.txt 00:02:28.992 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:28.992 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.992 ++ uname 00:02:28.992 + [[ Linux == \L\i\n\u\x ]] 00:02:28.992 + sudo dmesg -T 00:02:29.252 + sudo dmesg --clear 00:02:29.252 + dmesg_pid=6166 00:02:29.252 + sudo dmesg -Tw 00:02:29.252 + [[ Fedora Linux == FreeBSD ]] 00:02:29.252 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.252 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.252 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:29.252 + [[ -x /usr/src/fio-static/fio ]] 00:02:29.252 + export FIO_BIN=/usr/src/fio-static/fio 00:02:29.252 + FIO_BIN=/usr/src/fio-static/fio 00:02:29.252 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:29.252 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:29.252 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:29.252 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.252 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.252 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:29.252 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.252 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.252 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.252 03:11:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:29.252 03:11:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=main 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.252 03:11:16 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:02:29.252 03:11:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:29.252 03:11:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.252 03:11:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:29.252 03:11:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:29.252 03:11:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:29.252 03:11:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.252 03:11:16 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.252 03:11:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.253 03:11:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.253 03:11:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.253 03:11:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.253 03:11:16 -- paths/export.sh@5 -- $ export PATH 00:02:29.253 03:11:16 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.253 03:11:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:29.253 03:11:16 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:29.513 03:11:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732158676.XXXXXX 00:02:29.513 03:11:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732158676.6DG9s7 00:02:29.513 03:11:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:29.513 03:11:16 -- common/autobuild_common.sh@499 -- $ '[' -n main ']' 00:02:29.513 03:11:16 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:29.513 03:11:16 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:29.513 03:11:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:29.513 03:11:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.513 03:11:16 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:29.513 03:11:16 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:29.513 03:11:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.513 03:11:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:29.513 03:11:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:29.513 03:11:16 -- pm/common@17 -- $ local monitor 00:02:29.513 03:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.513 03:11:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.513 03:11:16 -- pm/common@25 -- $ sleep 1 00:02:29.513 03:11:16 -- pm/common@21 -- $ date +%s 00:02:29.513 03:11:16 -- pm/common@21 -- $ date +%s 00:02:29.513 03:11:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732158676 00:02:29.513 03:11:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732158676 00:02:29.513 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732158676_collect-cpu-load.pm.log 00:02:29.513 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732158676_collect-vmstat.pm.log 00:02:30.452 03:11:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:30.452 03:11:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:30.452 03:11:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:30.452 03:11:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.452 03:11:17 -- spdk/autobuild.sh@16 -- $ date -u 00:02:30.452 Thu Nov 21 03:11:17 AM UTC 2024 00:02:30.452 03:11:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:30.452 v25.01-pre-219-g557f022f6 00:02:30.452 03:11:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:30.452 03:11:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:30.452 03:11:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 
00:02:30.452 03:11:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.452 03:11:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.452 ************************************
00:02:30.452 START TEST asan
00:02:30.452 ************************************
00:02:30.452 using asan
00:02:30.452 03:11:17 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:30.452
00:02:30.452 real 0m0.000s
00:02:30.452 user 0m0.000s
00:02:30.452 sys 0m0.000s
00:02:30.452 03:11:17 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:30.452 03:11:17 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.452 ************************************
00:02:30.452 END TEST asan
00:02:30.452 ************************************
00:02:30.452 03:11:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:30.452 03:11:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:30.452 03:11:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.452 03:11:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.452 03:11:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.452 ************************************
00:02:30.452 START TEST ubsan
00:02:30.452 ************************************
00:02:30.452 using ubsan
00:02:30.452 03:11:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:30.452
00:02:30.452 real 0m0.001s
00:02:30.452 user 0m0.001s
00:02:30.452 sys 0m0.000s
00:02:30.452 03:11:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:30.452 03:11:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.452 ************************************
00:02:30.452 END TEST ubsan
00:02:30.452 ************************************
00:02:30.712 03:11:18 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:02:30.712 03:11:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:30.712 03:11:18 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:30.712 03:11:18 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:30.712 03:11:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.712 03:11:18 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.712 ************************************
00:02:30.712 START TEST build_native_dpdk
00:02:30.712 ************************************
00:02:30.712 03:11:18 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:30.712 f4ccce58c1 doc: allow warnings in Sphinx for DTS
00:02:30.712 0c0cd5ffb0 version: 24.11-rc3
00:02:30.712 8c9a7471a0 dts: add checksum offload test suite
00:02:30.712 bee7cf823c dts: add checksum offload to testpmd shell
00:02:30.712 2eef9a80df dts: add dynamic queue test suite
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:30.712 03:11:18 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:30.713 03:11:18 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:30.713 03:11:18 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:30.713 03:11:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:30.713 03:11:18 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:30.713 patching file config/rte_config.h
00:02:30.713 Hunk #1 succeeded at 72 (offset 13 lines).
00:02:30.713 03:11:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:30.713 03:11:18 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:30.713 03:11:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:30.714 03:11:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:30.714 03:11:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:30.714 03:11:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:30.714 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:30.714 03:11:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:02:30.714 03:11:18 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1
00:02:30.714 patching file drivers/bus/pci/linux/pci_uio.c
00:02:30.714 03:11:18 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:30.714 03:11:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:30.714 03:11:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:30.714 03:11:18 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:30.714 03:11:18 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:37.329 The Meson build system
00:02:37.329 Version: 1.5.0
00:02:37.329 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:37.329 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:37.329 Build type: native build
00:02:37.329 Project name: DPDK
00:02:37.329 Project version: 24.11.0-rc3
00:02:37.329 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:37.329 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:37.329 Host machine cpu family: x86_64
00:02:37.329 Host machine cpu: x86_64
00:02:37.329 Message: ## Building in Developer Mode ##
00:02:37.329 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:37.329 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:37.329 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:37.329 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:02:37.329 Program cat found: YES (/usr/bin/cat)
00:02:37.329 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:37.329 Compiler for C supports arguments -march=native: YES
00:02:37.329 Checking for size of "void *" : 8
00:02:37.329 Checking for size of "void *" : 8 (cached)
00:02:37.329 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:37.329 Library m found: YES
00:02:37.329 Library numa found: YES
00:02:37.329 Has header "numaif.h" : YES
00:02:37.329 Library fdt found: NO
00:02:37.329 Library execinfo found: NO
00:02:37.329 Has header "execinfo.h" : YES
00:02:37.329 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:37.329 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:37.329 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:37.329 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:37.329 Run-time dependency openssl found: YES 3.1.1
00:02:37.329 Run-time dependency libpcap found: YES 1.10.4
00:02:37.329 Has header "pcap.h" with dependency libpcap: YES
00:02:37.329 Compiler for C supports arguments -Wcast-qual: YES
00:02:37.329 Compiler for C supports arguments -Wdeprecated: YES
00:02:37.329 Compiler for C supports arguments -Wformat: YES
00:02:37.329 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:37.329 Compiler for C supports arguments -Wformat-security: NO
00:02:37.329 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:37.329 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:37.329 Compiler for C supports arguments -Wnested-externs: YES
00:02:37.329 Compiler for C supports arguments -Wold-style-definition: YES
00:02:37.329 Compiler for C supports arguments -Wpointer-arith: YES
00:02:37.329 Compiler for C supports arguments -Wsign-compare: YES
00:02:37.329 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:37.329 Compiler for C supports arguments -Wundef: YES
00:02:37.329 Compiler for C supports arguments -Wwrite-strings: YES
00:02:37.329 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:37.329 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:37.329 Program objdump found: YES (/usr/bin/objdump)
00:02:37.329 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES
00:02:37.329 Checking if "AVX512 checking" compiles: YES
00:02:37.329 Fetching value of define "__AVX512F__" : 1
00:02:37.329 Fetching value of define "__AVX512BW__" : 1
00:02:37.329 Fetching value of define "__AVX512DQ__" : 1
00:02:37.329 Fetching value of define "__AVX512VL__" : 1
00:02:37.329 Fetching value of define "__SSE4_2__" : 1
00:02:37.329 Fetching value of define "__AES__" : 1
00:02:37.329 Fetching value of define "__AVX__" : 1
00:02:37.329 Fetching value of define "__AVX2__" : 1
00:02:37.329 Fetching value of define "__AVX512BW__" : 1
00:02:37.329 Fetching value of define "__AVX512CD__" : 1
00:02:37.329 Fetching value of define "__AVX512DQ__" : 1
00:02:37.329 Fetching value of define "__AVX512F__" : 1
00:02:37.329 Fetching value of define "__AVX512VL__" : 1
00:02:37.329 Fetching value of define "__PCLMUL__" : 1
00:02:37.329 Fetching value of define "__RDRND__" : 1
00:02:37.329 Fetching value of define "__RDSEED__" : 1
00:02:37.329 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:37.329 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:37.329 Message: lib/log: Defining dependency "log"
00:02:37.329 Message: lib/kvargs: Defining dependency "kvargs"
00:02:37.329 Message: lib/argparse: Defining dependency "argparse"
00:02:37.329 Message: lib/telemetry: Defining dependency "telemetry"
00:02:37.329 Checking for function "pthread_attr_setaffinity_np" : YES
00:02:37.329 Checking for function "getentropy" : NO
00:02:37.329 Message: lib/eal: Defining dependency "eal"
00:02:37.329 Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:02:37.329 Message: lib/ring: Defining dependency "ring"
00:02:37.329 Message: lib/rcu: Defining dependency "rcu"
00:02:37.329 Message: lib/mempool: Defining dependency "mempool"
00:02:37.329 Message: lib/mbuf: Defining dependency "mbuf"
00:02:37.329 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:37.329 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:37.329 Compiler for C supports arguments -mpclmul: YES
00:02:37.329 Compiler for C supports arguments -maes: YES
00:02:37.329 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:37.329 Message: lib/net: Defining dependency "net"
00:02:37.329 Message: lib/meter: Defining dependency "meter"
00:02:37.329 Message: lib/ethdev: Defining dependency "ethdev"
00:02:37.329 Message: lib/pci: Defining dependency "pci"
00:02:37.329 Message: lib/cmdline: Defining dependency "cmdline"
00:02:37.329 Message: lib/metrics: Defining dependency "metrics"
00:02:37.329 Message: lib/hash: Defining dependency "hash"
00:02:37.329 Message: lib/timer: Defining dependency "timer"
00:02:37.329 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:37.329 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:37.329 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:37.329 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:37.329 Message: lib/acl: Defining dependency "acl"
00:02:37.329 Message: lib/bbdev: Defining dependency "bbdev"
00:02:37.329 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:37.329 Run-time dependency libelf found: YES 0.191
00:02:37.329 Message: lib/bpf: Defining dependency "bpf"
00:02:37.329 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:37.329 Message: lib/compressdev: Defining dependency "compressdev"
00:02:37.329 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:37.330 Message: lib/distributor: Defining dependency "distributor"
00:02:37.330 Message: lib/dmadev: Defining dependency "dmadev"
00:02:37.330 Message: lib/efd: Defining dependency "efd"
00:02:37.330 Message: lib/eventdev: Defining dependency "eventdev"
00:02:37.330 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:37.330 Message: lib/gpudev: Defining dependency "gpudev"
00:02:37.330 Message: lib/gro: Defining dependency "gro"
00:02:37.330 Message: lib/gso: Defining dependency "gso"
00:02:37.330 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:37.330 Message: lib/jobstats: Defining dependency "jobstats"
00:02:37.330 Message: lib/latencystats: Defining dependency "latencystats"
00:02:37.330 Message: lib/lpm: Defining dependency "lpm"
00:02:37.330 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:37.330 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:37.330 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:37.330 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:37.330 Message: lib/member: Defining dependency "member"
00:02:37.330 Message: lib/pcapng: Defining dependency "pcapng"
00:02:37.330 Message: lib/power: Defining dependency "power"
00:02:37.330 Message: lib/rawdev: Defining dependency "rawdev"
00:02:37.330 Message: lib/regexdev: Defining dependency "regexdev"
00:02:37.330 Message: lib/mldev: Defining dependency "mldev"
00:02:37.330 Message: lib/rib: Defining dependency "rib"
00:02:37.330 Message: lib/reorder: Defining dependency "reorder"
00:02:37.330 Message: lib/sched: Defining dependency "sched"
00:02:37.330 Message: lib/security: Defining dependency "security"
00:02:37.330 Message: lib/stack: Defining dependency "stack"
00:02:37.330 Has header "linux/userfaultfd.h" : YES
00:02:37.330 Message: lib/vhost: Defining dependency "vhost"
00:02:37.330 Message: lib/ipsec: Defining dependency "ipsec"
00:02:37.330 Message: lib/pdcp: Defining dependency "pdcp"
00:02:37.330 Message: lib/fib: Defining dependency "fib"
00:02:37.330 Message: lib/port: Defining dependency "port"
00:02:37.330 Message: lib/pdump: Defining dependency "pdump"
00:02:37.330 Message: lib/table: Defining dependency "table"
00:02:37.330 Message: lib/pipeline: Defining dependency "pipeline"
00:02:37.330 Message: lib/graph: Defining dependency "graph"
00:02:37.330 Message: lib/node: Defining dependency "node"
00:02:37.330 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:37.330 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:37.330 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:37.330 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:37.330 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:37.330 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:37.330 Compiler for C supports arguments -Wno-unused-value: YES
00:02:37.330 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:37.330 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:37.330 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:37.330 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:37.330 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:37.330 Message: drivers/power/acpi: Defining dependency "power_acpi"
00:02:37.330 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate"
00:02:37.330 Message: drivers/power/cppc: Defining dependency "power_cppc"
00:02:37.330 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate"
00:02:37.330 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore"
00:02:37.330 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm"
00:02:37.330 Has header "sys/epoll.h" : YES
00:02:37.330 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:37.330 Configuring doxy-api-html.conf using configuration
00:02:37.330 Configuring doxy-api-man.conf using configuration
00:02:37.330 Program mandb found: YES (/usr/bin/mandb)
00:02:37.330 Program sphinx-build found: NO
00:02:37.330 Program sphinx-build found: NO
00:02:37.330 Configuring rte_build_config.h using configuration
00:02:37.330 Message:
00:02:37.330 =================
00:02:37.330 Applications Enabled
00:02:37.330 =================
00:02:37.330
00:02:37.330 apps:
00:02:37.330 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:37.330 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:37.330 test-pmd, test-regex, test-sad, test-security-perf,
00:02:37.330
00:02:37.330 Message:
00:02:37.330 =================
00:02:37.330 Libraries Enabled
00:02:37.330 =================
00:02:37.330
00:02:37.330 libs:
00:02:37.330 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:02:37.330 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:02:37.330 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:02:37.330 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:02:37.330 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:02:37.330 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:02:37.330 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:02:37.330 graph, node,
00:02:37.330
00:02:37.330 Message:
00:02:37.330 ===============
00:02:37.330 Drivers Enabled
00:02:37.330 ===============
00:02:37.330
00:02:37.330 common:
00:02:37.330
00:02:37.330 bus:
00:02:37.330 pci, vdev,
00:02:37.330 mempool:
00:02:37.330 ring,
00:02:37.330 dma:
00:02:37.330
00:02:37.330 net:
00:02:37.330 i40e,
00:02:37.330 raw:
00:02:37.330
00:02:37.330 crypto:
00:02:37.330
00:02:37.330 compress:
00:02:37.330
00:02:37.330 regex:
00:02:37.330
00:02:37.330 ml:
00:02:37.330
00:02:37.330 vdpa:
00:02:37.330
00:02:37.330 event:
00:02:37.330
00:02:37.330 baseband:
00:02:37.330
00:02:37.330 gpu:
00:02:37.330
00:02:37.330 power:
00:02:37.330 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm,
00:02:37.330
00:02:37.330 Message:
00:02:37.330 =================
00:02:37.330 Content Skipped
00:02:37.330 =================
00:02:37.330
00:02:37.330 apps:
00:02:37.330
00:02:37.330 libs:
00:02:37.330
00:02:37.330 drivers:
00:02:37.330 common/cpt: not in enabled drivers build config
00:02:37.330 common/dpaax: not in enabled drivers build config
00:02:37.330 common/iavf: not in enabled drivers build config
00:02:37.330 common/idpf: not in enabled drivers build config
00:02:37.330 common/ionic: not in enabled drivers build config
00:02:37.330 common/mvep: not in enabled drivers build config
00:02:37.330 common/octeontx: not in enabled drivers build config
00:02:37.330 bus/auxiliary: not in enabled drivers build config
00:02:37.330 bus/cdx: not in enabled drivers build config
00:02:37.330 bus/dpaa: not in enabled drivers build config
00:02:37.330 bus/fslmc: not in enabled drivers build config
00:02:37.330 bus/ifpga: not in enabled drivers build config
00:02:37.330 bus/platform: not in enabled drivers build config
00:02:37.330 bus/uacce: not in enabled drivers build config
00:02:37.330 bus/vmbus: not in enabled drivers build config
00:02:37.330 common/cnxk: not in enabled drivers build config
00:02:37.330 common/mlx5: not in enabled drivers build config
00:02:37.330 common/nfp: not in enabled drivers build config
00:02:37.330 common/nitrox: not in enabled drivers build config
00:02:37.330 common/qat: not in enabled drivers build config
00:02:37.330 common/sfc_efx: not in enabled drivers build config
00:02:37.330 mempool/bucket: not in enabled drivers build config
00:02:37.330 mempool/cnxk: not in enabled drivers build config
00:02:37.330 mempool/dpaa: not in enabled drivers build config
00:02:37.330 mempool/dpaa2: not in enabled drivers build config
00:02:37.330 mempool/octeontx: not in enabled drivers build config
00:02:37.330 mempool/stack: not in enabled drivers build config
00:02:37.330 dma/cnxk: not in enabled drivers build config
00:02:37.330 dma/dpaa: not in enabled drivers build config
00:02:37.330 dma/dpaa2: not in enabled drivers build config
00:02:37.330 dma/hisilicon: not in enabled drivers build config
00:02:37.330 dma/idxd: not in enabled drivers build config
00:02:37.330 dma/ioat: not in enabled drivers build config
00:02:37.330 dma/odm: not in enabled drivers build config
00:02:37.330 dma/skeleton: not in enabled drivers build config
00:02:37.330 net/af_packet: not in enabled drivers build config
00:02:37.330 net/af_xdp: not in enabled drivers build config
00:02:37.330 net/ark: not in enabled drivers build config
00:02:37.330 net/atlantic: not in enabled drivers build config
00:02:37.330 net/avp: not in enabled drivers build config
00:02:37.330 net/axgbe: not in enabled drivers build config
00:02:37.331 net/bnx2x: not in enabled drivers build config
00:02:37.331 net/bnxt: not in enabled drivers build config
00:02:37.331 net/bonding: not in enabled drivers build config
00:02:37.331 net/cnxk: not in enabled drivers build config
00:02:37.331 net/cpfl: not in enabled drivers build config
00:02:37.331 net/cxgbe: not in enabled drivers build config
00:02:37.331 net/dpaa: not in enabled drivers build config
00:02:37.331 net/dpaa2: not in enabled drivers build config
00:02:37.331 net/e1000: not in enabled drivers build config
00:02:37.331 net/ena: not in enabled drivers build config
00:02:37.331 net/enetc: not in enabled drivers build config
00:02:37.331 net/enetfec: not in enabled drivers build config
00:02:37.331 net/enic: not in enabled drivers build config
00:02:37.331 net/failsafe: not in enabled drivers build config
00:02:37.331 net/fm10k: not in enabled drivers build config
00:02:37.331 net/gve: not in enabled drivers build config
00:02:37.331 net/hinic: not in enabled drivers build config
00:02:37.331 net/hns3: not in enabled drivers build config
00:02:37.331 net/iavf: not in enabled drivers build config
00:02:37.331 net/ice: not in enabled drivers build config
00:02:37.331 net/idpf: not in enabled drivers build config
00:02:37.331 net/igc: not in enabled drivers build config
00:02:37.331 net/ionic: not in enabled drivers build config
00:02:37.331 net/ipn3ke: not in enabled drivers build config
00:02:37.331 net/ixgbe: not in enabled drivers build config
00:02:37.331 net/mana: not in enabled drivers build config
00:02:37.331 net/memif: not in enabled drivers build config
00:02:37.331 net/mlx4: not in enabled drivers build config
00:02:37.331 net/mlx5: not in enabled drivers build config
00:02:37.331 net/mvneta: not in enabled drivers build config
00:02:37.331 net/mvpp2: not in enabled drivers build config
00:02:37.331 net/netvsc: not in enabled drivers build config
00:02:37.331 net/nfb: not in enabled drivers build config
00:02:37.331 net/nfp: not in enabled drivers build config
00:02:37.331 net/ngbe: not in enabled drivers build config
00:02:37.331 net/ntnic: not in enabled drivers build config
00:02:37.331 net/null: not in enabled drivers build config
00:02:37.331 net/octeontx: not in enabled drivers build config
00:02:37.331 net/octeon_ep: not in enabled drivers build config
00:02:37.331 net/pcap: not in enabled drivers build config
00:02:37.331 net/pfe: not in enabled drivers build config
00:02:37.331 net/qede: not in enabled drivers build config
00:02:37.331 net/r8169: not in enabled drivers build config
00:02:37.331 net/ring: not in enabled drivers build config
00:02:37.331 net/sfc: not in enabled drivers build config
00:02:37.331 net/softnic: not in enabled drivers build config
00:02:37.331 net/tap: not in enabled drivers build config
00:02:37.331 net/thunderx: not in enabled drivers build config
00:02:37.331 net/txgbe: not in enabled drivers build config
00:02:37.331 net/vdev_netvsc: not in enabled drivers build config
00:02:37.331 net/vhost: not in enabled drivers build config
00:02:37.331 net/virtio: not in enabled drivers build config
00:02:37.331 net/vmxnet3: not in enabled drivers build config
00:02:37.331 net/zxdh: not in enabled drivers build config
00:02:37.331 raw/cnxk_bphy: not in enabled drivers build config
00:02:37.331 raw/cnxk_gpio: not in enabled drivers build config
00:02:37.331 raw/cnxk_rvu_lf: not in enabled drivers build config
00:02:37.331
raw/dpaa2_cmdif: not in enabled drivers build config 00:02:37.331 raw/gdtc: not in enabled drivers build config 00:02:37.331 raw/ifpga: not in enabled drivers build config 00:02:37.331 raw/ntb: not in enabled drivers build config 00:02:37.331 raw/skeleton: not in enabled drivers build config 00:02:37.331 crypto/armv8: not in enabled drivers build config 00:02:37.331 crypto/bcmfs: not in enabled drivers build config 00:02:37.331 crypto/caam_jr: not in enabled drivers build config 00:02:37.331 crypto/ccp: not in enabled drivers build config 00:02:37.331 crypto/cnxk: not in enabled drivers build config 00:02:37.331 crypto/dpaa_sec: not in enabled drivers build config 00:02:37.331 crypto/dpaa2_sec: not in enabled drivers build config 00:02:37.331 crypto/ionic: not in enabled drivers build config 00:02:37.331 crypto/ipsec_mb: not in enabled drivers build config 00:02:37.331 crypto/mlx5: not in enabled drivers build config 00:02:37.331 crypto/mvsam: not in enabled drivers build config 00:02:37.331 crypto/nitrox: not in enabled drivers build config 00:02:37.331 crypto/null: not in enabled drivers build config 00:02:37.331 crypto/octeontx: not in enabled drivers build config 00:02:37.331 crypto/openssl: not in enabled drivers build config 00:02:37.331 crypto/scheduler: not in enabled drivers build config 00:02:37.331 crypto/uadk: not in enabled drivers build config 00:02:37.331 crypto/virtio: not in enabled drivers build config 00:02:37.331 compress/isal: not in enabled drivers build config 00:02:37.331 compress/mlx5: not in enabled drivers build config 00:02:37.331 compress/nitrox: not in enabled drivers build config 00:02:37.331 compress/octeontx: not in enabled drivers build config 00:02:37.331 compress/uadk: not in enabled drivers build config 00:02:37.331 compress/zlib: not in enabled drivers build config 00:02:37.331 regex/mlx5: not in enabled drivers build config 00:02:37.331 regex/cn9k: not in enabled drivers build config 00:02:37.331 ml/cnxk: not in enabled 
drivers build config 00:02:37.331 vdpa/ifc: not in enabled drivers build config 00:02:37.331 vdpa/mlx5: not in enabled drivers build config 00:02:37.331 vdpa/nfp: not in enabled drivers build config 00:02:37.331 vdpa/sfc: not in enabled drivers build config 00:02:37.331 event/cnxk: not in enabled drivers build config 00:02:37.331 event/dlb2: not in enabled drivers build config 00:02:37.331 event/dpaa: not in enabled drivers build config 00:02:37.331 event/dpaa2: not in enabled drivers build config 00:02:37.331 event/dsw: not in enabled drivers build config 00:02:37.331 event/opdl: not in enabled drivers build config 00:02:37.331 event/skeleton: not in enabled drivers build config 00:02:37.331 event/sw: not in enabled drivers build config 00:02:37.331 event/octeontx: not in enabled drivers build config 00:02:37.331 baseband/acc: not in enabled drivers build config 00:02:37.331 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:37.331 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:37.331 baseband/la12xx: not in enabled drivers build config 00:02:37.331 baseband/null: not in enabled drivers build config 00:02:37.331 baseband/turbo_sw: not in enabled drivers build config 00:02:37.331 gpu/cuda: not in enabled drivers build config 00:02:37.331 power/amd_uncore: not in enabled drivers build config 00:02:37.331 00:02:37.331 00:02:37.331 Message: DPDK build config complete: 00:02:37.331 source path = "/home/vagrant/spdk_repo/dpdk" 00:02:37.331 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:02:37.331 Build targets in project: 246 00:02:37.331 00:02:37.331 DPDK 24.11.0-rc3 00:02:37.331 00:02:37.331 User defined options 00:02:37.331 libdir : lib 00:02:37.331 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:37.331 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:37.331 c_link_args : 00:02:37.331 enable_docs : false 00:02:37.331 enable_drivers: 
bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:37.331 enable_kmods : false 00:02:38.269 machine : native 00:02:38.269 tests : false 00:02:38.269 00:02:38.269 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.269 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:38.269 03:11:25 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:38.269 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:38.528 [1/766] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:02:38.528 [2/766] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:02:38.528 [3/766] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:02:38.528 [4/766] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.528 [5/766] Linking static target lib/librte_kvargs.a 00:02:38.528 [6/766] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:02:38.528 [7/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.528 [8/766] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.528 [9/766] Linking static target lib/librte_log.a 00:02:38.786 [10/766] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:38.786 [11/766] Linking static target lib/librte_argparse.a 00:02:38.786 [12/766] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.786 [13/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.045 [14/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.045 [15/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.045 [16/766] Generating lib/argparse.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:39.045 [17/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.045 [18/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.045 [19/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.045 [20/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.045 [21/766] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.045 [22/766] Linking target lib/librte_log.so.25.0 00:02:39.045 [23/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.303 [24/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.303 [25/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.303 [26/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:02:39.561 [27/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.561 [28/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.561 [29/766] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:39.561 [30/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:39.561 [31/766] Linking target lib/librte_kvargs.so.25.0 00:02:39.561 [32/766] Linking target lib/librte_argparse.so.25.0 00:02:39.561 [33/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.561 [34/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.561 [35/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.561 [36/766] Linking static target lib/librte_telemetry.a 00:02:39.819 [37/766] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:39.819 [38/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 
00:02:39.819 [39/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:39.819 [40/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:40.078 [41/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:40.078 [42/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:40.078 [43/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:40.078 [44/766] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.078 [45/766] Linking target lib/librte_telemetry.so.25.0
00:02:40.078 [46/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:40.078 [47/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:40.336 [48/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:40.336 [49/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:40.336 [50/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:40.336 [51/766] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols
00:02:40.336 [52/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o
00:02:40.336 [53/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:40.336 [54/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:40.594 [55/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:40.594 [56/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:40.594 [57/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:40.853 [58/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:40.853 [59/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:40.853 [60/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:40.853 [61/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:40.853 [62/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:40.853 [63/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:41.111 [64/766] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:41.111 [65/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:41.111 [66/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:41.111 [67/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:41.111 [68/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:41.111 [69/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:41.370 [70/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:41.370 [71/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:41.370 [72/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:41.370 [73/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:41.370 [74/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:41.370 [75/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:41.629 [76/766] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:41.629 [77/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:41.629 [78/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:41.629 [79/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:41.886 [80/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:41.886 [81/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:41.886 [82/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:41.886 [83/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:41.886 [84/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:41.886 [85/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:41.886 [86/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:42.143 [87/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:42.143 [88/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:42.143 [89/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:42.143 [90/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:42.143 [91/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o
00:02:42.401 [92/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:42.401 [93/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:42.401 [94/766] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:42.401 [95/766] Linking static target lib/librte_ring.a
00:02:42.658 [96/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:42.658 [97/766] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.658 [98/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:42.658 [99/766] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:42.916 [100/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:42.916 [101/766] Linking static target lib/librte_eal.a
00:02:42.916 [102/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:42.916 [103/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:43.175 [104/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:43.175 [105/766] Linking static target lib/librte_mempool.a
00:02:43.175 [106/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:43.175 [107/766] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:43.433 [108/766] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:43.433 [109/766] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:43.433 [110/766] Linking static target lib/librte_rcu.a
00:02:43.433 [111/766] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:43.433 [112/766] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:43.433 [113/766] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:43.433 [114/766] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:43.690 [115/766] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.690 [116/766] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:43.690 [117/766] Linking static target lib/librte_net.a
00:02:43.690 [118/766] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:43.690 [119/766] Linking static target lib/librte_meter.a
00:02:43.690 [120/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:43.948 [121/766] Linking static target lib/librte_mbuf.a
00:02:43.948 [122/766] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.948 [123/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:43.948 [124/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:43.948 [125/766] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.948 [126/766] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.207 [127/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:44.465 [128/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:44.465 [129/766] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.723 [130/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:44.981 [131/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:44.981 [132/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:45.239 [133/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:45.239 [134/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:45.497 [135/766] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:45.497 [136/766] Linking static target lib/librte_pci.a
00:02:45.497 [137/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:45.755 [138/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:45.755 [139/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:45.755 [140/766] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.755 [141/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:45.755 [142/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:45.755 [143/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:46.013 [144/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:46.013 [145/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:46.013 [146/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:46.013 [147/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:46.013 [148/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:46.013 [149/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:46.013 [150/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:46.013 [151/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:46.272 [152/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:46.272 [153/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:46.272 [154/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:46.530 [155/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:46.530 [156/766] Linking static target lib/librte_cmdline.a
00:02:46.530 [157/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:46.530 [158/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:46.530 [159/766] Linking static target lib/librte_metrics.a
00:02:46.789 [160/766] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:46.789 [161/766] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:47.047 [162/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:47.047 [163/766] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.306 [164/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:47.306 [165/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o
00:02:47.565 [166/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:47.824 [167/766] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.824 [168/766] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:47.824 [169/766] Linking static target lib/librte_timer.a
00:02:48.082 [170/766] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:48.341 [171/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:48.341 [172/766] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.341 [173/766] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:48.341 [174/766] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:49.278 [175/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:49.278 [176/766] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:49.278 [177/766] Linking static target lib/librte_bitratestats.a
00:02:49.278 [178/766] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:49.278 [179/766] Linking static target lib/librte_bbdev.a
00:02:49.537 [180/766] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.537 [181/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:49.537 [182/766] Linking static target lib/librte_ethdev.a
00:02:49.796 [183/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:49.796 [184/766] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:50.055 [185/766] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.314 [186/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:50.314 [187/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:50.314 [188/766] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.314 [189/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:50.595 [190/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:50.595 [191/766] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:50.595 [192/766] Linking static target lib/acl/libavx2_tmp.a
00:02:50.595 [193/766] Linking target lib/librte_eal.so.25.0
00:02:50.855 [194/766] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols
00:02:50.855 [195/766] Linking target lib/librte_ring.so.25.0
00:02:51.114 [196/766] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:51.114 [197/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:51.114 [198/766] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols
00:02:51.114 [199/766] Linking target lib/librte_pci.so.25.0
00:02:51.114 [200/766] Linking target lib/librte_meter.so.25.0
00:02:51.114 [201/766] Linking target lib/librte_rcu.so.25.0
00:02:51.114 [202/766] Linking target lib/librte_mempool.so.25.0
00:02:51.114 [203/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:51.373 [204/766] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols
00:02:51.373 [205/766] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:51.373 [206/766] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols
00:02:51.373 [207/766] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols
00:02:51.373 [208/766] Linking static target lib/librte_hash.a
00:02:51.373 [209/766] Linking static target lib/librte_cfgfile.a
00:02:51.373 [210/766] Linking target lib/librte_mbuf.so.25.0
00:02:51.373 [211/766] Linking target lib/librte_timer.so.25.0
00:02:51.373 [212/766] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols
00:02:51.373 [213/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:51.373 [214/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:51.632 [215/766] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols
00:02:51.632 [216/766] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols
00:02:51.632 [217/766] Linking target lib/librte_net.so.25.0
00:02:51.632 [218/766] Linking target lib/librte_bbdev.so.25.0
00:02:51.890 [219/766] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.890 [220/766] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols
00:02:51.890 [221/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:51.890 [222/766] Linking static target lib/librte_bpf.a
00:02:51.890 [223/766] Linking target lib/librte_cfgfile.so.25.0
00:02:51.890 [224/766] Linking target lib/librte_cmdline.so.25.0
00:02:52.148 [225/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:52.148 [226/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:52.407 [227/766] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.407 [228/766] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.407 [229/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:52.407 [230/766] Linking target lib/librte_hash.so.25.0
00:02:52.407 [231/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:52.407 [232/766] Linking static target lib/librte_compressdev.a
00:02:52.666 [233/766] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols
00:02:52.924 [234/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:52.924 [235/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:52.924 [236/766] Linking static target lib/librte_acl.a
00:02:52.924 [237/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:53.183 [238/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:53.183 [239/766] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.183 [240/766] Linking target lib/librte_compressdev.so.25.0
00:02:53.442 [241/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:53.442 [242/766] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.442 [243/766] Linking target lib/librte_acl.so.25.0
00:02:53.442 [244/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:53.442 [245/766] Linking static target lib/librte_dmadev.a
00:02:53.701 [246/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:53.701 [247/766] Linking static target lib/librte_distributor.a
00:02:53.701 [248/766] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols
00:02:53.701 [249/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:53.961 [250/766] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.961 [251/766] Linking target lib/librte_distributor.so.25.0
00:02:53.961 [252/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:54.220 [253/766] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.220 [254/766] Linking target lib/librte_dmadev.so.25.0
00:02:54.220 [255/766] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols
00:02:54.479 [256/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:54.479 [257/766] Linking static target lib/librte_cryptodev.a
00:02:54.479 [258/766] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:54.479 [259/766] Linking static target lib/librte_efd.a
00:02:54.737 [260/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:54.996 [261/766] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.996 [262/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:54.996 [263/766] Linking target lib/librte_efd.so.25.0
00:02:55.564 [264/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:55.564 [265/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:55.822 [266/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:55.822 [267/766] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:55.822 [268/766] Linking static target lib/librte_gpudev.a
00:02:55.822 [269/766] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:55.822 [270/766] Linking static target lib/librte_dispatcher.a
00:02:56.099 [271/766] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:56.099 [272/766] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.377 [273/766] Linking target lib/librte_cryptodev.so.25.0
00:02:56.377 [274/766] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:56.635 [275/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:56.635 [276/766] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols
00:02:56.635 [277/766] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.635 [278/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:56.635 [279/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:56.894 [280/766] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.894 [281/766] Linking target lib/librte_gpudev.so.25.0
00:02:56.894 [282/766] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.894 [283/766] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:57.151 [284/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:57.151 [285/766] Linking target lib/librte_ethdev.so.25.0
00:02:57.409 [286/766] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:57.409 [287/766] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols
00:02:57.409 [288/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:57.409 [289/766] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:57.409 [290/766] Linking target lib/librte_metrics.so.25.0
00:02:57.409 [291/766] Linking target lib/librte_bpf.so.25.0
00:02:57.409 [292/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:57.409 [293/766] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:57.409 [294/766] Linking static target lib/librte_gso.a
00:02:57.668 [295/766] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols
00:02:57.668 [296/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:57.668 [297/766] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols
00:02:57.668 [298/766] Linking static target lib/librte_gro.a
00:02:57.668 [299/766] Linking target lib/librte_bitratestats.so.25.0
00:02:57.926 [300/766] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.927 [301/766] Linking target lib/librte_gso.so.25.0
00:02:57.927 [302/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:57.927 [303/766] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.186 [304/766] Linking target lib/librte_gro.so.25.0
00:02:58.186 [305/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:58.186 [306/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:58.444 [307/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:58.444 [308/766] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:58.445 [309/766] Linking static target lib/librte_jobstats.a
00:02:58.704 [310/766] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:58.704 [311/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:58.704 [312/766] Linking static target lib/librte_latencystats.a
00:02:58.704 [313/766] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:58.963 [314/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:58.963 [315/766] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.963 [316/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:58.963 [317/766] Linking static target lib/librte_eventdev.a
00:02:58.963 [318/766] Linking static target lib/librte_ip_frag.a
00:02:58.963 [319/766] Linking target lib/librte_latencystats.so.25.0
00:02:58.963 [320/766] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.963 [321/766] Linking target lib/librte_jobstats.so.25.0
00:02:58.964 [322/766] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:58.964 [323/766] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:59.222 [324/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:59.222 [325/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:59.222 [326/766] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:59.222 [327/766] Linking static target lib/librte_lpm.a
00:02:59.222 [328/766] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.482 [329/766] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o
00:02:59.482 [330/766] Linking target lib/librte_ip_frag.so.25.0
00:02:59.741 [331/766] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols
00:02:59.741 [332/766] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o
00:02:59.741 [333/766] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.741 [334/766] Linking target lib/librte_lpm.so.25.0
00:03:00.001 [335/766] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:00.001 [336/766] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:00.001 [337/766] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:00.001 [338/766] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.001 [339/766] Linking static target lib/librte_power.a 00:03:00.260 [340/766] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:00.260 [341/766] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:00.260 [342/766] Linking static target lib/librte_pcapng.a 00:03:00.519 [343/766] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:00.519 [344/766] Linking static target lib/librte_rawdev.a 00:03:00.519 [345/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:00.778 [346/766] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:00.778 [347/766] Linking static target lib/librte_regexdev.a 00:03:00.778 [348/766] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.778 [349/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:00.778 [350/766] Linking target lib/librte_pcapng.so.25.0 00:03:00.778 [351/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:01.037 [352/766] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:01.037 [353/766] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.296 [354/766] Linking target lib/librte_rawdev.so.25.0 00:03:01.296 [355/766] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.296 [356/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:01.554 [357/766] Linking target lib/librte_power.so.25.0 00:03:01.554 [358/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:01.554 [359/766] Linking static target lib/librte_mldev.a 00:03:01.554 [360/766] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:01.554 [361/766] Linking static target lib/librte_member.a 00:03:01.554 [362/766] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:01.814 [363/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:01.814 [364/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:01.814 [365/766] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.814 [366/766] Linking static target lib/librte_rib.a 00:03:01.814 [367/766] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:01.814 [368/766] Linking target lib/librte_regexdev.so.25.0 00:03:02.074 [369/766] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.074 [370/766] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.074 [371/766] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:02.074 [372/766] Linking target lib/librte_eventdev.so.25.0 00:03:02.074 [373/766] Linking target lib/librte_member.so.25.0 00:03:02.333 [374/766] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.333 [375/766] Linking static target lib/librte_reorder.a 00:03:02.333 [376/766] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:02.333 [377/766] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:02.333 [378/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:02.591 [379/766] Linking target lib/librte_dispatcher.so.25.0 00:03:02.591 [380/766] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.591 [381/766] Linking target lib/librte_rib.so.25.0 00:03:02.851 [382/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:02.851 [383/766] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:02.851 [384/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:02.851 [385/766] Linking static target lib/librte_stack.a 00:03:02.851 [386/766] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:02.851 [387/766] Linking target lib/librte_reorder.so.25.0 00:03:02.851 [388/766] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.851 [389/766] Linking static target lib/librte_security.a 00:03:02.851 [390/766] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.110 [391/766] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:03.110 [392/766] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.369 [393/766] Linking target lib/librte_stack.so.25.0 00:03:03.369 [394/766] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.369 [395/766] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.627 [396/766] Linking target lib/librte_security.so.25.0 00:03:03.627 [397/766] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:03.627 [398/766] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.627 [399/766] Linking static target lib/librte_sched.a 00:03:03.627 [400/766] Linking target lib/librte_mldev.so.25.0 00:03:03.627 [401/766] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.627 [402/766] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.627 [403/766] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:04.194 [404/766] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.194 [405/766] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.194 [406/766] Linking target lib/librte_sched.so.25.0 00:03:04.453 [407/766] Generating symbol file 
lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:04.711 [408/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.711 [409/766] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:04.711 [410/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.970 [411/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:05.229 [412/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:05.487 [413/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:05.745 [414/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:05.745 [415/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:05.745 [416/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:06.004 [417/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:06.262 [418/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:06.262 [419/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:06.522 [420/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:06.522 [421/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:06.522 [422/766] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:06.781 [423/766] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:07.040 [424/766] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:07.040 [425/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:07.040 [426/766] Linking static target lib/librte_ipsec.a 00:03:07.040 [427/766] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:07.299 [428/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:07.299 [429/766] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.558 [430/766] Linking target lib/librte_ipsec.so.25.0 00:03:07.558 [431/766] Linking static target lib/librte_pdcp.a 
00:03:07.558 [432/766] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:07.817 [433/766] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:07.817 [434/766] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:07.817 [435/766] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:07.817 [436/766] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:07.817 [437/766] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.817 [438/766] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:08.075 [439/766] Linking target lib/librte_pdcp.so.25.0 00:03:08.075 [440/766] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:08.075 [441/766] Linking static target lib/librte_fib.a 00:03:08.075 [442/766] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:08.333 [443/766] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.592 [444/766] Linking target lib/librte_fib.so.25.0 00:03:08.592 [445/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:08.850 [446/766] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:08.850 [447/766] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:08.850 [448/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:08.850 [449/766] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:09.109 [450/766] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:09.367 [451/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:09.625 [452/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:09.625 [453/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:09.625 [454/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:09.884 
[455/766] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:09.884 [456/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:09.884 [457/766] Linking static target lib/librte_port.a 00:03:09.884 [458/766] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:09.884 [459/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:09.884 [460/766] Linking static target lib/librte_pdump.a 00:03:10.143 [461/766] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:10.143 [462/766] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:10.143 [463/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:10.143 [464/766] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.401 [465/766] Linking target lib/librte_pdump.so.25.0 00:03:10.401 [466/766] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.659 [467/766] Linking target lib/librte_port.so.25.0 00:03:10.659 [468/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:10.659 [469/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:10.659 [470/766] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:10.659 [471/766] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:10.918 [472/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:11.178 [473/766] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:11.178 [474/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:11.437 [475/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:11.437 [476/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:11.437 [477/766] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:11.437 [478/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:11.696 [479/766] Linking static target lib/librte_table.a 00:03:11.954 [480/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:11.954 [481/766] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:12.219 [482/766] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.219 [483/766] Linking target lib/librte_table.so.25.0 00:03:12.481 [484/766] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:12.481 [485/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:12.481 [486/766] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:12.740 [487/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.740 [488/766] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:12.999 [489/766] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:12.999 [490/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:12.999 [491/766] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:12.999 [492/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:13.257 [493/766] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:13.257 [494/766] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:13.515 [495/766] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:13.773 [496/766] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:13.773 [497/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:13.773 [498/766] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:13.773 [499/766] Linking static target lib/librte_graph.a 00:03:13.773 [500/766] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:13.773 [501/766] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:14.339 [502/766] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:14.339 [503/766] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.339 [504/766] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:14.339 [505/766] Linking target lib/librte_graph.so.25.0 00:03:14.598 [506/766] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:14.598 [507/766] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:14.598 [508/766] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:14.857 [509/766] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:14.857 [510/766] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:14.857 [511/766] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:14.857 [512/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:15.116 [513/766] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:15.116 [514/766] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:15.116 [515/766] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:15.377 [516/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:15.377 [517/766] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:15.377 [518/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:15.377 [519/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:15.635 [520/766] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:15.635 [521/766] Linking static target lib/librte_node.a 00:03:15.635 [522/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:15.893 [523/766] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:15.893 [524/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:15.893 [525/766] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:15.894 [526/766] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.894 [527/766] Linking target lib/librte_node.so.25.0 00:03:16.153 [528/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:16.153 [529/766] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:16.153 [530/766] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:16.153 [531/766] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:16.153 [532/766] Linking static target drivers/librte_bus_pci.a 00:03:16.153 [533/766] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:16.153 [534/766] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:16.153 [535/766] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:16.153 [536/766] Linking static target drivers/librte_bus_vdev.a 00:03:16.412 [537/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:16.412 [538/766] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:16.412 [539/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:16.671 [540/766] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.671 [541/766] Linking target drivers/librte_bus_vdev.so.25.0 00:03:16.671 [542/766] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:16.671 [543/766] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:16.671 [544/766] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:16.671 [545/766] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:16.671 [546/766] Linking target drivers/librte_bus_pci.so.25.0 00:03:16.930 [547/766] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:16.930 [548/766] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.930 [549/766] Linking static target drivers/librte_mempool_ring.a 00:03:16.930 [550/766] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.930 [551/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:16.930 [552/766] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:16.930 [553/766] Linking target drivers/librte_mempool_ring.so.25.0 00:03:17.189 [554/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:17.189 [555/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:17.755 [556/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:18.014 [557/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:18.014 [558/766] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:18.580 [559/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:18.839 [560/766] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:18.839 [561/766] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:18.839 [562/766] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:18.839 [563/766] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:19.406 [564/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:19.406 [565/766] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:19.406 [566/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:19.406 [567/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:19.665 [568/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:19.923 [569/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:19.923 [570/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:20.181 [571/766] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:03:20.181 [572/766] Linking static target drivers/libtmp_rte_power_acpi.a 00:03:20.181 [573/766] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:03:20.181 [574/766] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:03:20.439 [575/766] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:03:20.439 [576/766] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:20.439 [577/766] Linking static target drivers/librte_power_acpi.a 00:03:20.439 [578/766] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:20.439 [579/766] Linking target drivers/librte_power_acpi.so.25.0 00:03:20.439 [580/766] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:03:20.439 [581/766] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:03:20.439 [582/766] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:20.439 [583/766] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:20.439 [584/766] Linking static target drivers/librte_power_amd_pstate.a 00:03:20.439 [585/766] Linking static target 
drivers/libtmp_rte_power_cppc.a 00:03:20.439 [586/766] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:20.697 [587/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:03:20.697 [588/766] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:03:20.697 [589/766] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:20.697 [590/766] Linking static target drivers/librte_power_cppc.a 00:03:20.697 [591/766] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:20.697 [592/766] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:03:20.697 [593/766] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:03:20.697 [594/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:03:20.697 [595/766] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:03:20.697 [596/766] Linking target drivers/librte_power_cppc.so.25.0 00:03:20.955 [597/766] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:03:20.955 [598/766] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:03:20.955 [599/766] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:20.955 [600/766] Linking static target drivers/librte_power_intel_pstate.a 00:03:20.955 [601/766] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:20.955 [602/766] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:20.955 [603/766] Linking static target drivers/librte_power_kvm_vm.a 00:03:20.955 [604/766] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:03:20.955 [605/766] Linking static target 
drivers/libtmp_rte_power_intel_uncore.a 00:03:20.955 [606/766] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:21.214 [607/766] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:21.214 [608/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:21.214 [609/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:21.214 [610/766] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:21.214 [611/766] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.214 [612/766] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:03:21.214 [613/766] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:21.214 [614/766] Linking static target drivers/librte_power_intel_uncore.a 00:03:21.214 [615/766] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:21.214 [616/766] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:21.472 [617/766] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:21.472 [618/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:21.730 [619/766] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:21.730 [620/766] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:21.730 [621/766] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:21.730 [622/766] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:22.296 [623/766] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:22.296 [624/766] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:22.296 [625/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:22.296 [626/766] Linking static target 
drivers/libtmp_rte_net_i40e.a
00:03:22.296 [627/766] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:22.296 [628/766] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:22.296 [629/766] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:03:22.554 [630/766] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:22.554 [631/766] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:22.554 [632/766] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:22.554 [633/766] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:22.554 [634/766] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:22.812 [635/766] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:22.812 [636/766] Linking static target drivers/librte_net_i40e.a
00:03:22.812 [637/766] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:22.812 [638/766] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:23.071 [639/766] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:23.071 [640/766] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:23.071 [641/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:23.071 [642/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:23.071 [643/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:23.328 [644/766] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:23.328 [645/766] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.586 [646/766] Linking target drivers/librte_net_i40e.so.25.0
00:03:23.586 [647/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:23.845 [648/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:24.103 [649/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:24.103 [650/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:24.103 [651/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:24.361 [652/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:24.361 [653/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:24.618 [654/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:24.618 [655/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:24.875 [656/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:24.875 [657/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:25.133 [658/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:25.133 [659/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:25.133 [660/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:25.133 [661/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:25.133 [662/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:25.133 [663/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:25.390 [664/766] Linking static target lib/librte_vhost.a
00:03:25.390 [665/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:25.648 [666/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:25.648 [667/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:25.648 [668/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:25.908 [669/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:25.908 [670/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:25.908 [671/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:26.477 [672/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:26.477 [673/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:26.477 [674/766] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.477 [675/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:26.477 [676/766] Linking target lib/librte_vhost.so.25.0
00:03:26.477 [677/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:27.413 [678/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:27.413 [679/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:27.413 [680/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:27.413 [681/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:27.672 [682/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:27.672 [683/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:27.672 [684/766] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:27.672 [685/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:27.931 [686/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:27.931 [687/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:27.931 [688/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:27.931 [689/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:28.217 [690/766] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:28.217 [691/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:28.476 [692/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:28.476 [693/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:28.476 [694/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:28.476 [695/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:28.735 [696/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:28.735 [697/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:28.735 [698/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:28.994 [699/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:28.994 [700/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:28.994 [701/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:29.252 [702/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:29.252 [703/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:29.252 [704/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:29.511 [705/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:29.511 [706/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:29.511 [707/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:29.511 [708/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:29.511 [709/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:29.771 [710/766] Linking static target lib/librte_pipeline.a
00:03:30.031 [711/766] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:30.031 [712/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:30.031 [713/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:30.290 [714/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:30.290 [715/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:30.290 [716/766] Linking target app/dpdk-dumpcap
00:03:30.549 [717/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:30.549 [718/766] Linking target app/dpdk-graph
00:03:30.549 [719/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:30.549 [720/766] Linking target app/dpdk-pdump
00:03:30.549 [721/766] Linking target app/dpdk-proc-info
00:03:30.808 [722/766] Linking target app/dpdk-test-acl
00:03:30.808 [723/766] Linking target app/dpdk-test-bbdev
00:03:31.070 [724/766] Linking target app/dpdk-test-cmdline
00:03:31.070 [725/766] Linking target app/dpdk-test-compress-perf
00:03:31.070 [726/766] Linking target app/dpdk-test-crypto-perf
00:03:31.070 [727/766] Linking target app/dpdk-test-dma-perf
00:03:31.070 [728/766] Linking target app/dpdk-test-eventdev
00:03:31.330 [729/766] Linking target app/dpdk-test-fib
00:03:31.589 [730/766] Linking target app/dpdk-test-flow-perf
00:03:31.589 [731/766] Linking target app/dpdk-test-gpudev
00:03:31.589 [732/766] Linking target app/dpdk-test-pipeline
00:03:31.589 [733/766] Linking target app/dpdk-test-mldev
00:03:31.589 [734/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:31.850 [735/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:32.109 [736/766] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o
00:03:32.109 [737/766] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:32.109 [738/766] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:32.374 [739/766] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:32.374 [740/766] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:32.651 [741/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:32.651 [742/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:32.651 [743/766] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.910 [744/766] Linking target lib/librte_pipeline.so.25.0
00:03:32.910 [745/766] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:32.910 [746/766] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:33.168 [747/766] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:33.169 [748/766] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:33.427 [749/766] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:33.685 [750/766] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:33.944 [751/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:33.944 [752/766] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:34.202 [753/766] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:34.202 [754/766] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:34.202 [755/766] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:34.461 [756/766] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:34.461 [757/766] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:34.719 [758/766] Linking target app/dpdk-test-regex
00:03:34.719 [759/766] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:03:34.719 [760/766] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:34.719 [761/766] Linking target app/dpdk-test-sad
00:03:34.978 [762/766] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:35.237 [763/766] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:35.495 [764/766] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:35.495 [765/766] Linking target app/dpdk-test-security-perf
00:03:36.063 [766/766] Linking target app/dpdk-testpmd
00:03:36.063 03:12:23 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:03:36.063 03:12:23 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:36.063 03:12:23 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:36.063 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:36.063 [0/1] Installing files.
00:03:36.322 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:36.322 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:36.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:36.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.585 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.586 Installing
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:36.586 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.586 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.587 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:36.587 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:36.587 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.587 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.587 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.588 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:36.588 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:36.589 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 
00:03:36.589 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_ethdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.589 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:36.590 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 
Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.590 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.160 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.160 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.160 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.160 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.160 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_sched.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 
Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing 
drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.161 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:37.161 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing 
/home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 
Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 
Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.164 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:37.164 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:37.164 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:37.164 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:37.164 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:37.164 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:37.164 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:37.164 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:37.164 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:37.164 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:37.164 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:37.164 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:37.164 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:37.164 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:37.164 Installing symlink pointing to librte_rcu.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:37.164 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:37.164 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:37.164 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:37.164 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:37.164 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:37.164 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:37.164 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:37.164 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:37.164 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:37.164 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:37.164 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:37.164 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:37.164 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:37.164 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:37.164 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:37.164 Installing symlink pointing to librte_metrics.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:37.164 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:37.164 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:37.164 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:37.164 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:37.164 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:37.165 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:37.165 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:37.165 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:37.165 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:37.165 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:37.165 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:37.165 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:37.165 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:37.165 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:37.165 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:37.165 Installing symlink pointing to 
librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:37.165 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:37.165 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:37.165 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:37.165 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:37.165 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:37.165 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:37.165 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:37.165 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:37.165 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:37.165 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:37.165 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:37.165 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:37.165 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:37.165 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:37.165 Installing symlink pointing to librte_gpudev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:37.165 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:37.165 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:37.165 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:37.165 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:37.165 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:37.165 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:37.165 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:37.165 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:37.165 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:37.165 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:37.165 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:37.165 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:37.165 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:37.165 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:37.165 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:37.165 Installing symlink pointing to 
librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:37.165 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:37.165 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:37.165 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:37.165 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:37.165 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:37.165 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:37.165 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:37.165 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:37.165 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:37.165 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:37.165 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:37.165 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:37.165 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:37.165 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:37.165 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:37.165 Installing symlink pointing to 
librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:37.165 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:37.165 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:37.165 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:37.165 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:37.165 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:37.165 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:37.165 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:37.165 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:37.165 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:37.165 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:37.165 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:37.165 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:37.165 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:37.165 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:37.165 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:37.165 Installing symlink pointing to librte_table.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:37.165 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:37.165 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:37.165 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:37.165 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:37.165 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:37.165 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:37.165 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:37.165 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:37.165 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:37.165 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:37.165 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:37.165 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:37.165 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:37.165 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:37.165 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:37.165 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:37.165 './librte_bus_vdev.so.25.0' -> 
'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:37.165 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:37.166 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:37.166 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:37.166 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:37.166 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:37.166 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:37.166 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:37.166 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:37.166 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:37.166 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:37.166 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:37.166 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:37.166 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:37.166 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:37.166 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:37.166 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:37.166 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:37.166 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:37.166 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:37.166 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:37.166 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:37.166 './librte_power_kvm_vm.so' -> 
'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:37.166 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:37.166 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:37.166 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:37.166 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:37.166 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:37.166 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:37.166 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:37.166 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:37.166 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:37.166 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:37.166 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:37.166 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:37.166 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:37.166 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:37.166 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:37.166 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:37.166 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:37.166 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:37.166 03:12:24 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:37.166 03:12:24 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:37.166 00:03:37.166 real 1m6.631s 00:03:37.166 user 7m58.550s 00:03:37.166 sys 1m19.087s 00:03:37.166 03:12:24 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.166 03:12:24 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:37.166 ************************************ 00:03:37.166 END TEST build_native_dpdk 00:03:37.166 ************************************ 00:03:37.425 03:12:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:37.425 03:12:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:37.425 03:12:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:37.425 03:12:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:37.425 03:12:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:37.425 03:12:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:37.425 03:12:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:37.425 03:12:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:37.425 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:37.683 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.683 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.683 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:37.942 Using 'verbs' RDMA provider 00:03:51.547 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:06.422 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:06.422 Creating mk/config.mk...done. 00:04:06.422 Creating mk/cc.flags.mk...done. 00:04:06.422 Type 'make' to build. 00:04:06.422 03:12:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:06.422 03:12:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:06.422 03:12:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:06.422 03:12:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:06.422 ************************************ 00:04:06.422 START TEST make 00:04:06.422 ************************************ 00:04:06.422 03:12:53 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:06.422 make[1]: Nothing to be done for 'all'. 
00:05:02.648 CC lib/log/log.o 00:05:02.648 CC lib/log/log_flags.o 00:05:02.648 CC lib/log/log_deprecated.o 00:05:02.648 CC lib/ut/ut.o 00:05:02.648 CC lib/ut_mock/mock.o 00:05:02.648 LIB libspdk_log.a 00:05:02.648 LIB libspdk_ut.a 00:05:02.648 LIB libspdk_ut_mock.a 00:05:02.648 SO libspdk_ut_mock.so.6.0 00:05:02.648 SO libspdk_log.so.7.1 00:05:02.648 SO libspdk_ut.so.2.0 00:05:02.648 SYMLINK libspdk_ut_mock.so 00:05:02.648 SYMLINK libspdk_log.so 00:05:02.648 SYMLINK libspdk_ut.so 00:05:02.648 CC lib/ioat/ioat.o 00:05:02.648 CXX lib/trace_parser/trace.o 00:05:02.648 CC lib/util/base64.o 00:05:02.648 CC lib/util/bit_array.o 00:05:02.648 CC lib/util/cpuset.o 00:05:02.648 CC lib/util/crc16.o 00:05:02.648 CC lib/util/crc32c.o 00:05:02.648 CC lib/util/crc32.o 00:05:02.648 CC lib/dma/dma.o 00:05:02.648 CC lib/vfio_user/host/vfio_user_pci.o 00:05:02.648 CC lib/util/crc32_ieee.o 00:05:02.648 CC lib/util/crc64.o 00:05:02.648 CC lib/vfio_user/host/vfio_user.o 00:05:02.648 CC lib/util/dif.o 00:05:02.648 CC lib/util/fd_group.o 00:05:02.648 CC lib/util/fd.o 00:05:02.648 CC lib/util/file.o 00:05:02.648 LIB libspdk_dma.a 00:05:02.648 SO libspdk_dma.so.5.0 00:05:02.648 CC lib/util/hexlify.o 00:05:02.648 CC lib/util/iov.o 00:05:02.648 SYMLINK libspdk_dma.so 00:05:02.648 CC lib/util/math.o 00:05:02.648 LIB libspdk_ioat.a 00:05:02.648 SO libspdk_ioat.so.7.0 00:05:02.648 CC lib/util/net.o 00:05:02.648 CC lib/util/pipe.o 00:05:02.648 CC lib/util/strerror_tls.o 00:05:02.648 SYMLINK libspdk_ioat.so 00:05:02.648 CC lib/util/string.o 00:05:02.648 LIB libspdk_vfio_user.a 00:05:02.648 CC lib/util/uuid.o 00:05:02.648 CC lib/util/xor.o 00:05:02.648 SO libspdk_vfio_user.so.5.0 00:05:02.648 CC lib/util/zipf.o 00:05:02.648 CC lib/util/md5.o 00:05:02.648 SYMLINK libspdk_vfio_user.so 00:05:02.648 LIB libspdk_util.a 00:05:02.648 SO libspdk_util.so.10.1 00:05:02.648 SYMLINK libspdk_util.so 00:05:02.648 LIB libspdk_trace_parser.a 00:05:02.648 SO libspdk_trace_parser.so.6.0 00:05:02.648 SYMLINK 
libspdk_trace_parser.so 00:05:02.648 CC lib/conf/conf.o 00:05:02.648 CC lib/rdma_utils/rdma_utils.o 00:05:02.648 CC lib/env_dpdk/env.o 00:05:02.648 CC lib/env_dpdk/pci.o 00:05:02.648 CC lib/env_dpdk/init.o 00:05:02.648 CC lib/env_dpdk/threads.o 00:05:02.648 CC lib/env_dpdk/memory.o 00:05:02.648 CC lib/idxd/idxd.o 00:05:02.648 CC lib/json/json_parse.o 00:05:02.648 CC lib/vmd/vmd.o 00:05:02.648 CC lib/env_dpdk/pci_ioat.o 00:05:02.648 LIB libspdk_conf.a 00:05:02.648 SO libspdk_conf.so.6.0 00:05:02.648 CC lib/json/json_util.o 00:05:02.648 CC lib/json/json_write.o 00:05:02.648 LIB libspdk_rdma_utils.a 00:05:02.648 SYMLINK libspdk_conf.so 00:05:02.648 CC lib/idxd/idxd_user.o 00:05:02.648 SO libspdk_rdma_utils.so.1.0 00:05:02.648 SYMLINK libspdk_rdma_utils.so 00:05:02.648 CC lib/vmd/led.o 00:05:02.648 CC lib/env_dpdk/pci_virtio.o 00:05:02.648 CC lib/rdma_provider/common.o 00:05:02.648 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:02.648 CC lib/env_dpdk/pci_vmd.o 00:05:02.648 CC lib/env_dpdk/pci_idxd.o 00:05:02.648 LIB libspdk_json.a 00:05:02.648 SO libspdk_json.so.6.0 00:05:02.648 CC lib/idxd/idxd_kernel.o 00:05:02.648 CC lib/env_dpdk/pci_event.o 00:05:02.648 CC lib/env_dpdk/sigbus_handler.o 00:05:02.648 CC lib/env_dpdk/pci_dpdk.o 00:05:02.648 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:02.648 SYMLINK libspdk_json.so 00:05:02.648 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:02.648 LIB libspdk_vmd.a 00:05:02.648 SO libspdk_vmd.so.6.0 00:05:02.648 LIB libspdk_rdma_provider.a 00:05:02.648 SO libspdk_rdma_provider.so.7.0 00:05:02.648 LIB libspdk_idxd.a 00:05:02.648 SYMLINK libspdk_vmd.so 00:05:02.648 SO libspdk_idxd.so.12.1 00:05:02.648 SYMLINK libspdk_rdma_provider.so 00:05:02.648 SYMLINK libspdk_idxd.so 00:05:02.648 CC lib/jsonrpc/jsonrpc_server.o 00:05:02.648 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:02.648 CC lib/jsonrpc/jsonrpc_client.o 00:05:02.648 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:02.648 LIB libspdk_jsonrpc.a 00:05:02.648 SO libspdk_jsonrpc.so.6.0 00:05:02.648 SYMLINK 
libspdk_jsonrpc.so 00:05:02.648 CC lib/rpc/rpc.o 00:05:02.648 LIB libspdk_env_dpdk.a 00:05:02.648 SO libspdk_env_dpdk.so.15.1 00:05:02.648 LIB libspdk_rpc.a 00:05:02.648 SO libspdk_rpc.so.6.0 00:05:02.648 SYMLINK libspdk_env_dpdk.so 00:05:02.648 SYMLINK libspdk_rpc.so 00:05:02.648 CC lib/trace/trace.o 00:05:02.648 CC lib/trace/trace_flags.o 00:05:02.648 CC lib/keyring/keyring.o 00:05:02.648 CC lib/keyring/keyring_rpc.o 00:05:02.648 CC lib/trace/trace_rpc.o 00:05:02.648 CC lib/notify/notify.o 00:05:02.648 CC lib/notify/notify_rpc.o 00:05:02.648 LIB libspdk_notify.a 00:05:02.648 SO libspdk_notify.so.6.0 00:05:02.648 SYMLINK libspdk_notify.so 00:05:02.648 LIB libspdk_keyring.a 00:05:02.648 SO libspdk_keyring.so.2.0 00:05:02.648 LIB libspdk_trace.a 00:05:02.648 SO libspdk_trace.so.11.0 00:05:02.648 SYMLINK libspdk_keyring.so 00:05:02.648 SYMLINK libspdk_trace.so 00:05:03.225 CC lib/sock/sock_rpc.o 00:05:03.225 CC lib/sock/sock.o 00:05:03.225 CC lib/thread/thread.o 00:05:03.225 CC lib/thread/iobuf.o 00:05:03.484 LIB libspdk_sock.a 00:05:03.742 SO libspdk_sock.so.10.0 00:05:03.742 SYMLINK libspdk_sock.so 00:05:04.000 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:04.000 CC lib/nvme/nvme_fabric.o 00:05:04.000 CC lib/nvme/nvme_ns_cmd.o 00:05:04.000 CC lib/nvme/nvme_pcie_common.o 00:05:04.000 CC lib/nvme/nvme_ctrlr.o 00:05:04.000 CC lib/nvme/nvme_pcie.o 00:05:04.000 CC lib/nvme/nvme_qpair.o 00:05:04.000 CC lib/nvme/nvme_ns.o 00:05:04.000 CC lib/nvme/nvme.o 00:05:04.935 LIB libspdk_thread.a 00:05:05.193 SO libspdk_thread.so.11.0 00:05:05.193 CC lib/nvme/nvme_quirks.o 00:05:05.193 SYMLINK libspdk_thread.so 00:05:05.193 CC lib/nvme/nvme_transport.o 00:05:05.193 CC lib/nvme/nvme_discovery.o 00:05:05.452 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:05.452 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:05.711 CC lib/accel/accel.o 00:05:05.711 CC lib/blob/blobstore.o 00:05:05.711 CC lib/blob/request.o 00:05:05.711 CC lib/init/json_config.o 00:05:05.711 CC lib/init/subsystem.o 00:05:05.969 CC 
lib/init/subsystem_rpc.o 00:05:05.969 CC lib/init/rpc.o 00:05:05.969 CC lib/blob/zeroes.o 00:05:05.969 CC lib/accel/accel_rpc.o 00:05:05.969 CC lib/nvme/nvme_tcp.o 00:05:06.227 CC lib/virtio/virtio.o 00:05:06.227 CC lib/nvme/nvme_opal.o 00:05:06.227 CC lib/fsdev/fsdev.o 00:05:06.227 LIB libspdk_init.a 00:05:06.228 CC lib/fsdev/fsdev_io.o 00:05:06.228 SO libspdk_init.so.6.0 00:05:06.228 CC lib/fsdev/fsdev_rpc.o 00:05:06.228 SYMLINK libspdk_init.so 00:05:06.228 CC lib/blob/blob_bs_dev.o 00:05:06.486 CC lib/nvme/nvme_io_msg.o 00:05:06.486 CC lib/nvme/nvme_poll_group.o 00:05:06.745 CC lib/virtio/virtio_vhost_user.o 00:05:06.745 CC lib/nvme/nvme_zns.o 00:05:06.745 CC lib/nvme/nvme_stubs.o 00:05:07.003 CC lib/virtio/virtio_vfio_user.o 00:05:07.003 LIB libspdk_fsdev.a 00:05:07.003 SO libspdk_fsdev.so.2.0 00:05:07.003 CC lib/nvme/nvme_auth.o 00:05:07.261 CC lib/nvme/nvme_cuse.o 00:05:07.261 SYMLINK libspdk_fsdev.so 00:05:07.261 CC lib/virtio/virtio_pci.o 00:05:07.261 CC lib/event/app.o 00:05:07.261 CC lib/nvme/nvme_rdma.o 00:05:07.261 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:07.261 CC lib/accel/accel_sw.o 00:05:07.519 CC lib/event/reactor.o 00:05:07.519 LIB libspdk_virtio.a 00:05:07.519 SO libspdk_virtio.so.7.0 00:05:07.778 SYMLINK libspdk_virtio.so 00:05:07.778 CC lib/event/log_rpc.o 00:05:07.778 LIB libspdk_accel.a 00:05:07.778 SO libspdk_accel.so.16.0 00:05:07.778 CC lib/event/app_rpc.o 00:05:07.778 SYMLINK libspdk_accel.so 00:05:08.036 CC lib/event/scheduler_static.o 00:05:08.036 LIB libspdk_fuse_dispatcher.a 00:05:08.036 SO libspdk_fuse_dispatcher.so.1.0 00:05:08.036 CC lib/bdev/bdev.o 00:05:08.036 CC lib/bdev/bdev_rpc.o 00:05:08.036 CC lib/bdev/bdev_zone.o 00:05:08.036 SYMLINK libspdk_fuse_dispatcher.so 00:05:08.036 CC lib/bdev/part.o 00:05:08.294 CC lib/bdev/scsi_nvme.o 00:05:08.294 LIB libspdk_event.a 00:05:08.553 SO libspdk_event.so.14.0 00:05:08.553 SYMLINK libspdk_event.so 00:05:09.127 LIB libspdk_nvme.a 00:05:09.385 SO libspdk_nvme.so.15.0 00:05:09.952 
SYMLINK libspdk_nvme.so 00:05:10.888 LIB libspdk_blob.a 00:05:10.888 SO libspdk_blob.so.11.0 00:05:10.888 SYMLINK libspdk_blob.so 00:05:11.148 CC lib/blobfs/tree.o 00:05:11.148 CC lib/blobfs/blobfs.o 00:05:11.407 CC lib/lvol/lvol.o 00:05:11.407 LIB libspdk_bdev.a 00:05:11.407 SO libspdk_bdev.so.17.0 00:05:11.665 SYMLINK libspdk_bdev.so 00:05:11.923 CC lib/nvmf/ctrlr.o 00:05:11.923 CC lib/nvmf/ctrlr_bdev.o 00:05:11.923 CC lib/nvmf/ctrlr_discovery.o 00:05:11.923 CC lib/nvmf/subsystem.o 00:05:11.923 CC lib/nbd/nbd.o 00:05:11.923 CC lib/ublk/ublk.o 00:05:11.923 CC lib/ftl/ftl_core.o 00:05:11.923 CC lib/scsi/dev.o 00:05:12.180 CC lib/scsi/lun.o 00:05:12.180 LIB libspdk_lvol.a 00:05:12.180 SO libspdk_lvol.so.10.0 00:05:12.439 CC lib/nbd/nbd_rpc.o 00:05:12.439 SYMLINK libspdk_lvol.so 00:05:12.439 CC lib/scsi/port.o 00:05:12.439 LIB libspdk_blobfs.a 00:05:12.439 CC lib/ftl/ftl_init.o 00:05:12.439 SO libspdk_blobfs.so.10.0 00:05:12.439 LIB libspdk_nbd.a 00:05:12.439 CC lib/ftl/ftl_layout.o 00:05:12.698 CC lib/scsi/scsi.o 00:05:12.698 SO libspdk_nbd.so.7.0 00:05:12.698 SYMLINK libspdk_blobfs.so 00:05:12.698 CC lib/ublk/ublk_rpc.o 00:05:12.698 SYMLINK libspdk_nbd.so 00:05:12.698 CC lib/nvmf/nvmf.o 00:05:12.698 CC lib/ftl/ftl_debug.o 00:05:12.698 CC lib/ftl/ftl_io.o 00:05:12.698 CC lib/scsi/scsi_bdev.o 00:05:12.698 CC lib/scsi/scsi_pr.o 00:05:12.958 CC lib/scsi/scsi_rpc.o 00:05:12.958 LIB libspdk_ublk.a 00:05:12.958 CC lib/ftl/ftl_sb.o 00:05:12.958 CC lib/ftl/ftl_l2p.o 00:05:12.958 SO libspdk_ublk.so.3.0 00:05:12.958 CC lib/nvmf/nvmf_rpc.o 00:05:12.958 CC lib/scsi/task.o 00:05:12.958 SYMLINK libspdk_ublk.so 00:05:12.958 CC lib/nvmf/transport.o 00:05:13.218 CC lib/ftl/ftl_l2p_flat.o 00:05:13.218 CC lib/nvmf/tcp.o 00:05:13.218 CC lib/nvmf/stubs.o 00:05:13.218 CC lib/nvmf/mdns_server.o 00:05:13.478 CC lib/ftl/ftl_nv_cache.o 00:05:13.478 LIB libspdk_scsi.a 00:05:13.478 SO libspdk_scsi.so.9.0 00:05:13.478 SYMLINK libspdk_scsi.so 00:05:13.478 CC lib/nvmf/rdma.o 00:05:13.478 CC 
lib/nvmf/auth.o 00:05:13.738 CC lib/ftl/ftl_band.o 00:05:13.998 CC lib/ftl/ftl_band_ops.o 00:05:13.998 CC lib/iscsi/conn.o 00:05:13.998 CC lib/vhost/vhost.o 00:05:13.998 CC lib/vhost/vhost_rpc.o 00:05:13.998 CC lib/vhost/vhost_scsi.o 00:05:14.258 CC lib/ftl/ftl_writer.o 00:05:14.258 CC lib/iscsi/init_grp.o 00:05:14.518 CC lib/ftl/ftl_rq.o 00:05:14.518 CC lib/iscsi/iscsi.o 00:05:14.518 CC lib/iscsi/param.o 00:05:14.518 CC lib/iscsi/portal_grp.o 00:05:14.518 CC lib/iscsi/tgt_node.o 00:05:14.518 CC lib/ftl/ftl_reloc.o 00:05:14.778 CC lib/vhost/vhost_blk.o 00:05:14.778 CC lib/iscsi/iscsi_subsystem.o 00:05:14.778 CC lib/iscsi/iscsi_rpc.o 00:05:15.038 CC lib/iscsi/task.o 00:05:15.038 CC lib/ftl/ftl_l2p_cache.o 00:05:15.038 CC lib/ftl/ftl_p2l.o 00:05:15.038 CC lib/vhost/rte_vhost_user.o 00:05:15.298 CC lib/ftl/ftl_p2l_log.o 00:05:15.298 CC lib/ftl/mngt/ftl_mngt.o 00:05:15.298 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:15.298 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:15.559 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:15.831 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:15.831 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:15.831 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:15.831 CC lib/ftl/utils/ftl_conf.o 00:05:15.831 CC lib/ftl/utils/ftl_md.o 00:05:15.831 CC lib/ftl/utils/ftl_mempool.o 00:05:15.831 CC lib/ftl/utils/ftl_bitmap.o 00:05:15.831 CC lib/ftl/utils/ftl_property.o 00:05:16.108 LIB libspdk_iscsi.a 00:05:16.108 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:16.108 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:16.108 SO libspdk_iscsi.so.8.0 00:05:16.108 LIB libspdk_vhost.a 00:05:16.108 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:16.108 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:16.108 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:05:16.108 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:16.108 SO libspdk_vhost.so.8.0 00:05:16.108 SYMLINK libspdk_iscsi.so 00:05:16.368 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:16.368 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:16.368 SYMLINK libspdk_vhost.so 00:05:16.368 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:16.368 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:16.368 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:16.368 LIB libspdk_nvmf.a 00:05:16.368 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:16.368 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:16.368 CC lib/ftl/base/ftl_base_dev.o 00:05:16.368 CC lib/ftl/base/ftl_base_bdev.o 00:05:16.368 CC lib/ftl/ftl_trace.o 00:05:16.368 SO libspdk_nvmf.so.20.0 00:05:16.628 LIB libspdk_ftl.a 00:05:16.628 SYMLINK libspdk_nvmf.so 00:05:16.888 SO libspdk_ftl.so.9.0 00:05:17.147 SYMLINK libspdk_ftl.so 00:05:17.715 CC module/env_dpdk/env_dpdk_rpc.o 00:05:17.715 CC module/accel/ioat/accel_ioat.o 00:05:17.715 CC module/accel/error/accel_error.o 00:05:17.715 CC module/accel/iaa/accel_iaa.o 00:05:17.715 CC module/sock/posix/posix.o 00:05:17.715 CC module/accel/dsa/accel_dsa.o 00:05:17.715 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:17.715 CC module/blob/bdev/blob_bdev.o 00:05:17.715 CC module/fsdev/aio/fsdev_aio.o 00:05:17.715 CC module/keyring/file/keyring.o 00:05:17.715 LIB libspdk_env_dpdk_rpc.a 00:05:17.715 SO libspdk_env_dpdk_rpc.so.6.0 00:05:17.975 CC module/keyring/file/keyring_rpc.o 00:05:17.975 SYMLINK libspdk_env_dpdk_rpc.so 00:05:17.975 CC module/accel/ioat/accel_ioat_rpc.o 00:05:17.975 LIB libspdk_scheduler_dynamic.a 00:05:17.975 SO libspdk_scheduler_dynamic.so.4.0 00:05:17.975 CC module/accel/iaa/accel_iaa_rpc.o 00:05:17.975 CC module/accel/error/accel_error_rpc.o 00:05:17.975 LIB libspdk_blob_bdev.a 00:05:17.975 SYMLINK libspdk_scheduler_dynamic.so 00:05:17.975 SO libspdk_blob_bdev.so.11.0 00:05:17.975 LIB libspdk_keyring_file.a 00:05:17.975 LIB libspdk_accel_ioat.a 00:05:17.975 CC module/accel/dsa/accel_dsa_rpc.o 00:05:17.975 CC 
module/keyring/linux/keyring.o 00:05:17.975 SO libspdk_keyring_file.so.2.0 00:05:17.975 SO libspdk_accel_ioat.so.6.0 00:05:17.975 LIB libspdk_accel_error.a 00:05:17.975 LIB libspdk_accel_iaa.a 00:05:17.975 SYMLINK libspdk_blob_bdev.so 00:05:18.235 SO libspdk_accel_error.so.2.0 00:05:18.235 SYMLINK libspdk_keyring_file.so 00:05:18.235 SO libspdk_accel_iaa.so.3.0 00:05:18.235 SYMLINK libspdk_accel_ioat.so 00:05:18.235 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:18.235 CC module/fsdev/aio/linux_aio_mgr.o 00:05:18.235 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:18.235 LIB libspdk_accel_dsa.a 00:05:18.235 SYMLINK libspdk_accel_iaa.so 00:05:18.235 SYMLINK libspdk_accel_error.so 00:05:18.235 CC module/keyring/linux/keyring_rpc.o 00:05:18.235 SO libspdk_accel_dsa.so.5.0 00:05:18.235 SYMLINK libspdk_accel_dsa.so 00:05:18.235 LIB libspdk_keyring_linux.a 00:05:18.494 LIB libspdk_scheduler_dpdk_governor.a 00:05:18.494 CC module/bdev/delay/vbdev_delay.o 00:05:18.494 CC module/bdev/error/vbdev_error.o 00:05:18.494 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:18.494 SO libspdk_keyring_linux.so.1.0 00:05:18.494 CC module/scheduler/gscheduler/gscheduler.o 00:05:18.494 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:18.494 CC module/bdev/error/vbdev_error_rpc.o 00:05:18.494 SYMLINK libspdk_keyring_linux.so 00:05:18.494 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:18.495 CC module/bdev/gpt/gpt.o 00:05:18.495 LIB libspdk_fsdev_aio.a 00:05:18.495 CC module/bdev/lvol/vbdev_lvol.o 00:05:18.495 SO libspdk_fsdev_aio.so.1.0 00:05:18.495 LIB libspdk_scheduler_gscheduler.a 00:05:18.495 CC module/blobfs/bdev/blobfs_bdev.o 00:05:18.495 LIB libspdk_sock_posix.a 00:05:18.755 SO libspdk_scheduler_gscheduler.so.4.0 00:05:18.755 SO libspdk_sock_posix.so.6.0 00:05:18.755 SYMLINK libspdk_fsdev_aio.so 00:05:18.755 CC module/bdev/gpt/vbdev_gpt.o 00:05:18.755 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:18.755 SYMLINK libspdk_scheduler_gscheduler.so 00:05:18.755 SYMLINK libspdk_sock_posix.so 
00:05:18.755 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:18.755 LIB libspdk_bdev_error.a 00:05:18.755 SO libspdk_bdev_error.so.6.0 00:05:18.755 LIB libspdk_bdev_delay.a 00:05:18.755 CC module/bdev/malloc/bdev_malloc.o 00:05:18.755 SYMLINK libspdk_bdev_error.so 00:05:18.755 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:18.755 CC module/bdev/null/bdev_null.o 00:05:18.755 SO libspdk_bdev_delay.so.6.0 00:05:18.755 CC module/bdev/nvme/bdev_nvme.o 00:05:19.015 LIB libspdk_bdev_gpt.a 00:05:19.015 LIB libspdk_blobfs_bdev.a 00:05:19.015 SO libspdk_bdev_gpt.so.6.0 00:05:19.015 SYMLINK libspdk_bdev_delay.so 00:05:19.015 SO libspdk_blobfs_bdev.so.6.0 00:05:19.015 CC module/bdev/passthru/vbdev_passthru.o 00:05:19.015 SYMLINK libspdk_bdev_gpt.so 00:05:19.015 SYMLINK libspdk_blobfs_bdev.so 00:05:19.015 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:19.015 CC module/bdev/null/bdev_null_rpc.o 00:05:19.015 CC module/bdev/raid/bdev_raid.o 00:05:19.015 LIB libspdk_bdev_lvol.a 00:05:19.275 CC module/bdev/split/vbdev_split.o 00:05:19.275 SO libspdk_bdev_lvol.so.6.0 00:05:19.275 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:19.275 CC module/bdev/raid/bdev_raid_rpc.o 00:05:19.275 CC module/bdev/raid/bdev_raid_sb.o 00:05:19.275 LIB libspdk_bdev_passthru.a 00:05:19.275 SYMLINK libspdk_bdev_lvol.so 00:05:19.275 CC module/bdev/raid/raid0.o 00:05:19.275 SO libspdk_bdev_passthru.so.6.0 00:05:19.275 LIB libspdk_bdev_malloc.a 00:05:19.275 LIB libspdk_bdev_null.a 00:05:19.275 SYMLINK libspdk_bdev_passthru.so 00:05:19.275 SO libspdk_bdev_null.so.6.0 00:05:19.275 SO libspdk_bdev_malloc.so.6.0 00:05:19.275 CC module/bdev/raid/raid1.o 00:05:19.275 CC module/bdev/split/vbdev_split_rpc.o 00:05:19.275 SYMLINK libspdk_bdev_null.so 00:05:19.535 CC module/bdev/raid/concat.o 00:05:19.535 SYMLINK libspdk_bdev_malloc.so 00:05:19.535 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:19.535 CC module/bdev/raid/raid5f.o 00:05:19.535 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:19.535 CC 
module/bdev/nvme/nvme_rpc.o 00:05:19.535 LIB libspdk_bdev_split.a 00:05:19.536 LIB libspdk_bdev_zone_block.a 00:05:19.536 SO libspdk_bdev_split.so.6.0 00:05:19.536 CC module/bdev/nvme/bdev_mdns_client.o 00:05:19.536 SO libspdk_bdev_zone_block.so.6.0 00:05:19.795 CC module/bdev/nvme/vbdev_opal.o 00:05:19.795 SYMLINK libspdk_bdev_split.so 00:05:19.795 SYMLINK libspdk_bdev_zone_block.so 00:05:19.795 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:19.795 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:19.795 CC module/bdev/aio/bdev_aio.o 00:05:20.054 CC module/bdev/ftl/bdev_ftl.o 00:05:20.054 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:20.054 CC module/bdev/iscsi/bdev_iscsi.o 00:05:20.054 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:20.054 CC module/bdev/aio/bdev_aio_rpc.o 00:05:20.313 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:20.313 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:20.314 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:20.314 LIB libspdk_bdev_aio.a 00:05:20.314 SO libspdk_bdev_aio.so.6.0 00:05:20.314 LIB libspdk_bdev_ftl.a 00:05:20.314 SO libspdk_bdev_ftl.so.6.0 00:05:20.314 SYMLINK libspdk_bdev_aio.so 00:05:20.314 SYMLINK libspdk_bdev_ftl.so 00:05:20.314 LIB libspdk_bdev_iscsi.a 00:05:20.314 LIB libspdk_bdev_raid.a 00:05:20.573 SO libspdk_bdev_iscsi.so.6.0 00:05:20.573 SO libspdk_bdev_raid.so.6.0 00:05:20.573 SYMLINK libspdk_bdev_iscsi.so 00:05:20.573 SYMLINK libspdk_bdev_raid.so 00:05:20.833 LIB libspdk_bdev_virtio.a 00:05:20.833 SO libspdk_bdev_virtio.so.6.0 00:05:21.093 SYMLINK libspdk_bdev_virtio.so 00:05:22.473 LIB libspdk_bdev_nvme.a 00:05:22.473 SO libspdk_bdev_nvme.so.7.1 00:05:22.473 SYMLINK libspdk_bdev_nvme.so 00:05:23.043 CC module/event/subsystems/fsdev/fsdev.o 00:05:23.043 CC module/event/subsystems/vmd/vmd.o 00:05:23.043 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:23.043 CC module/event/subsystems/scheduler/scheduler.o 00:05:23.043 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:23.043 CC module/event/subsystems/sock/sock.o 00:05:23.043 
CC module/event/subsystems/keyring/keyring.o 00:05:23.043 CC module/event/subsystems/iobuf/iobuf.o 00:05:23.043 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:23.043 LIB libspdk_event_vhost_blk.a 00:05:23.043 LIB libspdk_event_scheduler.a 00:05:23.043 LIB libspdk_event_keyring.a 00:05:23.043 LIB libspdk_event_fsdev.a 00:05:23.043 LIB libspdk_event_sock.a 00:05:23.043 LIB libspdk_event_iobuf.a 00:05:23.043 SO libspdk_event_vhost_blk.so.3.0 00:05:23.043 LIB libspdk_event_vmd.a 00:05:23.043 SO libspdk_event_keyring.so.1.0 00:05:23.043 SO libspdk_event_scheduler.so.4.0 00:05:23.043 SO libspdk_event_fsdev.so.1.0 00:05:23.043 SO libspdk_event_sock.so.5.0 00:05:23.043 SO libspdk_event_iobuf.so.3.0 00:05:23.303 SO libspdk_event_vmd.so.6.0 00:05:23.303 SYMLINK libspdk_event_vhost_blk.so 00:05:23.303 SYMLINK libspdk_event_fsdev.so 00:05:23.303 SYMLINK libspdk_event_keyring.so 00:05:23.303 SYMLINK libspdk_event_scheduler.so 00:05:23.303 SYMLINK libspdk_event_sock.so 00:05:23.303 SYMLINK libspdk_event_iobuf.so 00:05:23.303 SYMLINK libspdk_event_vmd.so 00:05:23.563 CC module/event/subsystems/accel/accel.o 00:05:23.823 LIB libspdk_event_accel.a 00:05:23.823 SO libspdk_event_accel.so.6.0 00:05:23.823 SYMLINK libspdk_event_accel.so 00:05:24.395 CC module/event/subsystems/bdev/bdev.o 00:05:24.395 LIB libspdk_event_bdev.a 00:05:24.395 SO libspdk_event_bdev.so.6.0 00:05:24.655 SYMLINK libspdk_event_bdev.so 00:05:24.916 CC module/event/subsystems/scsi/scsi.o 00:05:24.916 CC module/event/subsystems/nbd/nbd.o 00:05:24.916 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:24.916 CC module/event/subsystems/ublk/ublk.o 00:05:24.916 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:24.916 LIB libspdk_event_scsi.a 00:05:25.176 LIB libspdk_event_ublk.a 00:05:25.176 SO libspdk_event_scsi.so.6.0 00:05:25.176 LIB libspdk_event_nbd.a 00:05:25.176 SO libspdk_event_ublk.so.3.0 00:05:25.176 SO libspdk_event_nbd.so.6.0 00:05:25.176 SYMLINK libspdk_event_scsi.so 00:05:25.176 LIB libspdk_event_nvmf.a 
00:05:25.176 SYMLINK libspdk_event_nbd.so 00:05:25.176 SYMLINK libspdk_event_ublk.so 00:05:25.176 SO libspdk_event_nvmf.so.6.0 00:05:25.436 SYMLINK libspdk_event_nvmf.so 00:05:25.436 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:25.436 CC module/event/subsystems/iscsi/iscsi.o 00:05:25.696 LIB libspdk_event_vhost_scsi.a 00:05:25.696 LIB libspdk_event_iscsi.a 00:05:25.696 SO libspdk_event_vhost_scsi.so.3.0 00:05:25.696 SO libspdk_event_iscsi.so.6.0 00:05:25.696 SYMLINK libspdk_event_vhost_scsi.so 00:05:25.696 SYMLINK libspdk_event_iscsi.so 00:05:25.956 SO libspdk.so.6.0 00:05:25.956 SYMLINK libspdk.so 00:05:26.215 CXX app/trace/trace.o 00:05:26.215 CC app/trace_record/trace_record.o 00:05:26.215 CC app/spdk_nvme_perf/perf.o 00:05:26.215 CC app/spdk_lspci/spdk_lspci.o 00:05:26.215 CC app/nvmf_tgt/nvmf_main.o 00:05:26.215 CC app/iscsi_tgt/iscsi_tgt.o 00:05:26.215 CC app/spdk_tgt/spdk_tgt.o 00:05:26.215 CC test/thread/poller_perf/poller_perf.o 00:05:26.475 CC examples/util/zipf/zipf.o 00:05:26.475 CC test/dma/test_dma/test_dma.o 00:05:26.475 LINK spdk_lspci 00:05:26.475 LINK nvmf_tgt 00:05:26.475 LINK iscsi_tgt 00:05:26.475 LINK zipf 00:05:26.475 LINK spdk_trace_record 00:05:26.475 LINK spdk_tgt 00:05:26.475 LINK poller_perf 00:05:26.734 LINK spdk_trace 00:05:26.734 CC app/spdk_nvme_identify/identify.o 00:05:26.993 TEST_HEADER include/spdk/accel.h 00:05:26.993 TEST_HEADER include/spdk/accel_module.h 00:05:26.993 TEST_HEADER include/spdk/assert.h 00:05:26.993 TEST_HEADER include/spdk/barrier.h 00:05:26.993 TEST_HEADER include/spdk/base64.h 00:05:26.993 TEST_HEADER include/spdk/bdev.h 00:05:26.993 TEST_HEADER include/spdk/bdev_module.h 00:05:26.993 TEST_HEADER include/spdk/bdev_zone.h 00:05:26.993 TEST_HEADER include/spdk/bit_array.h 00:05:26.993 TEST_HEADER include/spdk/bit_pool.h 00:05:26.993 TEST_HEADER include/spdk/blob_bdev.h 00:05:26.993 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:26.993 TEST_HEADER include/spdk/blobfs.h 00:05:26.993 TEST_HEADER 
include/spdk/blob.h 00:05:26.993 TEST_HEADER include/spdk/conf.h 00:05:26.993 CC examples/ioat/perf/perf.o 00:05:26.993 TEST_HEADER include/spdk/config.h 00:05:26.993 TEST_HEADER include/spdk/cpuset.h 00:05:26.993 TEST_HEADER include/spdk/crc16.h 00:05:26.993 TEST_HEADER include/spdk/crc32.h 00:05:26.993 TEST_HEADER include/spdk/crc64.h 00:05:26.993 TEST_HEADER include/spdk/dif.h 00:05:26.993 TEST_HEADER include/spdk/dma.h 00:05:26.993 TEST_HEADER include/spdk/endian.h 00:05:26.993 TEST_HEADER include/spdk/env_dpdk.h 00:05:26.993 TEST_HEADER include/spdk/env.h 00:05:26.993 TEST_HEADER include/spdk/event.h 00:05:26.993 TEST_HEADER include/spdk/fd_group.h 00:05:26.993 CC test/app/bdev_svc/bdev_svc.o 00:05:26.993 TEST_HEADER include/spdk/fd.h 00:05:26.993 TEST_HEADER include/spdk/file.h 00:05:26.993 TEST_HEADER include/spdk/fsdev.h 00:05:26.993 TEST_HEADER include/spdk/fsdev_module.h 00:05:26.993 TEST_HEADER include/spdk/ftl.h 00:05:26.993 CC examples/ioat/verify/verify.o 00:05:26.993 CC examples/vmd/lsvmd/lsvmd.o 00:05:26.993 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:26.993 CC examples/idxd/perf/perf.o 00:05:26.993 TEST_HEADER include/spdk/gpt_spec.h 00:05:26.993 TEST_HEADER include/spdk/hexlify.h 00:05:26.993 TEST_HEADER include/spdk/histogram_data.h 00:05:26.993 TEST_HEADER include/spdk/idxd.h 00:05:26.993 TEST_HEADER include/spdk/idxd_spec.h 00:05:26.993 TEST_HEADER include/spdk/init.h 00:05:26.993 TEST_HEADER include/spdk/ioat.h 00:05:26.993 LINK test_dma 00:05:26.993 TEST_HEADER include/spdk/ioat_spec.h 00:05:26.993 TEST_HEADER include/spdk/iscsi_spec.h 00:05:26.993 TEST_HEADER include/spdk/json.h 00:05:26.993 TEST_HEADER include/spdk/jsonrpc.h 00:05:26.993 TEST_HEADER include/spdk/keyring.h 00:05:26.993 TEST_HEADER include/spdk/keyring_module.h 00:05:26.993 TEST_HEADER include/spdk/likely.h 00:05:26.993 TEST_HEADER include/spdk/log.h 00:05:26.993 TEST_HEADER include/spdk/lvol.h 00:05:26.993 TEST_HEADER include/spdk/md5.h 00:05:26.993 TEST_HEADER 
include/spdk/memory.h 00:05:26.993 TEST_HEADER include/spdk/mmio.h 00:05:26.993 TEST_HEADER include/spdk/nbd.h 00:05:26.993 TEST_HEADER include/spdk/net.h 00:05:26.993 TEST_HEADER include/spdk/notify.h 00:05:26.993 TEST_HEADER include/spdk/nvme.h 00:05:26.993 TEST_HEADER include/spdk/nvme_intel.h 00:05:26.993 CC app/spdk_nvme_discover/discovery_aer.o 00:05:26.993 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:26.993 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:26.993 TEST_HEADER include/spdk/nvme_spec.h 00:05:26.993 TEST_HEADER include/spdk/nvme_zns.h 00:05:26.993 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:26.993 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:26.993 TEST_HEADER include/spdk/nvmf.h 00:05:26.993 TEST_HEADER include/spdk/nvmf_spec.h 00:05:26.993 TEST_HEADER include/spdk/nvmf_transport.h 00:05:26.993 TEST_HEADER include/spdk/opal.h 00:05:26.993 TEST_HEADER include/spdk/opal_spec.h 00:05:26.993 TEST_HEADER include/spdk/pci_ids.h 00:05:26.993 TEST_HEADER include/spdk/pipe.h 00:05:26.993 TEST_HEADER include/spdk/queue.h 00:05:26.993 TEST_HEADER include/spdk/reduce.h 00:05:26.993 TEST_HEADER include/spdk/rpc.h 00:05:26.993 TEST_HEADER include/spdk/scheduler.h 00:05:26.993 TEST_HEADER include/spdk/scsi.h 00:05:26.993 TEST_HEADER include/spdk/scsi_spec.h 00:05:26.993 TEST_HEADER include/spdk/sock.h 00:05:26.993 TEST_HEADER include/spdk/stdinc.h 00:05:26.993 TEST_HEADER include/spdk/string.h 00:05:26.993 TEST_HEADER include/spdk/thread.h 00:05:26.993 TEST_HEADER include/spdk/trace.h 00:05:26.993 TEST_HEADER include/spdk/trace_parser.h 00:05:26.993 TEST_HEADER include/spdk/tree.h 00:05:26.993 TEST_HEADER include/spdk/ublk.h 00:05:26.993 TEST_HEADER include/spdk/util.h 00:05:26.993 TEST_HEADER include/spdk/uuid.h 00:05:26.993 TEST_HEADER include/spdk/version.h 00:05:26.994 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:26.994 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:26.994 TEST_HEADER include/spdk/vhost.h 00:05:26.994 TEST_HEADER include/spdk/vmd.h 
00:05:26.994 TEST_HEADER include/spdk/xor.h 00:05:26.994 TEST_HEADER include/spdk/zipf.h 00:05:26.994 CXX test/cpp_headers/accel.o 00:05:26.994 LINK lsvmd 00:05:27.252 LINK bdev_svc 00:05:27.252 LINK verify 00:05:27.252 LINK ioat_perf 00:05:27.252 LINK spdk_nvme_discover 00:05:27.252 CXX test/cpp_headers/accel_module.o 00:05:27.252 CXX test/cpp_headers/assert.o 00:05:27.252 LINK spdk_nvme_perf 00:05:27.252 LINK idxd_perf 00:05:27.510 CC examples/vmd/led/led.o 00:05:27.510 CC app/spdk_top/spdk_top.o 00:05:27.510 CXX test/cpp_headers/barrier.o 00:05:27.510 CC app/vhost/vhost.o 00:05:27.510 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:27.510 LINK led 00:05:27.510 CC app/spdk_dd/spdk_dd.o 00:05:27.510 CXX test/cpp_headers/base64.o 00:05:27.510 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:27.769 CC app/fio/nvme/fio_plugin.o 00:05:27.769 LINK vhost 00:05:27.769 CXX test/cpp_headers/bdev.o 00:05:27.769 CC test/app/histogram_perf/histogram_perf.o 00:05:27.769 CC test/env/mem_callbacks/mem_callbacks.o 00:05:27.769 LINK interrupt_tgt 00:05:27.769 LINK spdk_nvme_identify 00:05:28.029 LINK spdk_dd 00:05:28.029 CXX test/cpp_headers/bdev_module.o 00:05:28.029 LINK histogram_perf 00:05:28.029 LINK nvme_fuzz 00:05:28.029 CXX test/cpp_headers/bdev_zone.o 00:05:28.029 CXX test/cpp_headers/bit_array.o 00:05:28.029 CC app/fio/bdev/fio_plugin.o 00:05:28.288 CXX test/cpp_headers/bit_pool.o 00:05:28.288 CXX test/cpp_headers/blob_bdev.o 00:05:28.288 CC test/env/vtophys/vtophys.o 00:05:28.288 CXX test/cpp_headers/blobfs_bdev.o 00:05:28.288 LINK spdk_top 00:05:28.288 CC examples/thread/thread/thread_ex.o 00:05:28.288 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:28.288 LINK spdk_nvme 00:05:28.288 CXX test/cpp_headers/blobfs.o 00:05:28.288 CXX test/cpp_headers/blob.o 00:05:28.547 LINK mem_callbacks 00:05:28.547 CXX test/cpp_headers/conf.o 00:05:28.547 LINK vtophys 00:05:28.547 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:28.547 CC test/app/jsoncat/jsoncat.o 00:05:28.547 CXX 
test/cpp_headers/config.o 00:05:28.547 LINK thread 00:05:28.547 CXX test/cpp_headers/cpuset.o 00:05:28.806 CC test/app/stub/stub.o 00:05:28.806 CC test/env/memory/memory_ut.o 00:05:28.806 LINK spdk_bdev 00:05:28.806 LINK env_dpdk_post_init 00:05:28.806 CC test/event/event_perf/event_perf.o 00:05:28.806 LINK jsoncat 00:05:28.806 CC test/nvme/aer/aer.o 00:05:28.806 CXX test/cpp_headers/crc16.o 00:05:28.806 LINK stub 00:05:28.806 LINK event_perf 00:05:29.065 CC examples/sock/hello_world/hello_sock.o 00:05:29.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:29.065 CXX test/cpp_headers/crc32.o 00:05:29.065 CC test/rpc_client/rpc_client_test.o 00:05:29.065 CC test/env/pci/pci_ut.o 00:05:29.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:29.065 CXX test/cpp_headers/crc64.o 00:05:29.065 LINK aer 00:05:29.324 CC test/event/reactor/reactor.o 00:05:29.324 CXX test/cpp_headers/dif.o 00:05:29.324 LINK rpc_client_test 00:05:29.324 LINK hello_sock 00:05:29.324 LINK reactor 00:05:29.324 CC test/accel/dif/dif.o 00:05:29.324 CXX test/cpp_headers/dma.o 00:05:29.582 CC test/nvme/reset/reset.o 00:05:29.582 CXX test/cpp_headers/endian.o 00:05:29.582 LINK pci_ut 00:05:29.582 LINK vhost_fuzz 00:05:29.582 CXX test/cpp_headers/env_dpdk.o 00:05:29.582 CC test/event/reactor_perf/reactor_perf.o 00:05:29.582 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:29.841 CC test/nvme/sgl/sgl.o 00:05:29.841 LINK reset 00:05:29.841 CXX test/cpp_headers/env.o 00:05:29.841 LINK reactor_perf 00:05:29.841 CXX test/cpp_headers/event.o 00:05:30.100 CXX test/cpp_headers/fd_group.o 00:05:30.100 CC examples/accel/perf/accel_perf.o 00:05:30.100 LINK hello_fsdev 00:05:30.100 LINK dif 00:05:30.100 CXX test/cpp_headers/fd.o 00:05:30.100 CC test/event/app_repeat/app_repeat.o 00:05:30.100 LINK memory_ut 00:05:30.100 LINK sgl 00:05:30.100 CC test/blobfs/mkfs/mkfs.o 00:05:30.358 CXX test/cpp_headers/file.o 00:05:30.358 LINK app_repeat 00:05:30.358 CC test/nvme/e2edp/nvme_dp.o 00:05:30.358 CC 
examples/blob/hello_world/hello_blob.o 00:05:30.358 LINK mkfs 00:05:30.358 CC examples/blob/cli/blobcli.o 00:05:30.358 CXX test/cpp_headers/fsdev.o 00:05:30.616 LINK iscsi_fuzz 00:05:30.616 CC examples/nvme/hello_world/hello_world.o 00:05:30.616 CXX test/cpp_headers/fsdev_module.o 00:05:30.616 LINK accel_perf 00:05:30.616 CC test/lvol/esnap/esnap.o 00:05:30.616 LINK nvme_dp 00:05:30.616 CC test/event/scheduler/scheduler.o 00:05:30.616 LINK hello_blob 00:05:30.875 CXX test/cpp_headers/ftl.o 00:05:30.875 CC test/bdev/bdevio/bdevio.o 00:05:30.875 LINK hello_world 00:05:30.875 LINK blobcli 00:05:30.875 CC test/nvme/overhead/overhead.o 00:05:30.875 CC examples/nvme/reconnect/reconnect.o 00:05:30.875 CC test/nvme/err_injection/err_injection.o 00:05:30.875 LINK scheduler 00:05:31.133 CC test/nvme/startup/startup.o 00:05:31.133 CXX test/cpp_headers/fuse_dispatcher.o 00:05:31.133 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:31.133 CXX test/cpp_headers/gpt_spec.o 00:05:31.133 LINK err_injection 00:05:31.133 CC examples/nvme/arbitration/arbitration.o 00:05:31.133 LINK startup 00:05:31.391 LINK reconnect 00:05:31.391 LINK overhead 00:05:31.391 CC examples/nvme/hotplug/hotplug.o 00:05:31.391 LINK bdevio 00:05:31.391 CXX test/cpp_headers/hexlify.o 00:05:31.391 CXX test/cpp_headers/histogram_data.o 00:05:31.391 CC test/nvme/reserve/reserve.o 00:05:31.649 CC examples/bdev/hello_world/hello_bdev.o 00:05:31.649 CXX test/cpp_headers/idxd.o 00:05:31.649 LINK hotplug 00:05:31.649 CC examples/bdev/bdevperf/bdevperf.o 00:05:31.649 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:31.649 LINK arbitration 00:05:31.649 CC examples/nvme/abort/abort.o 00:05:31.649 LINK reserve 00:05:31.908 CXX test/cpp_headers/idxd_spec.o 00:05:31.908 LINK cmb_copy 00:05:31.908 LINK nvme_manage 00:05:31.908 LINK hello_bdev 00:05:31.908 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:31.908 CC test/nvme/simple_copy/simple_copy.o 00:05:31.908 CXX test/cpp_headers/init.o 00:05:31.908 CC 
test/nvme/connect_stress/connect_stress.o 00:05:31.908 CXX test/cpp_headers/ioat.o 00:05:32.166 CXX test/cpp_headers/ioat_spec.o 00:05:32.166 LINK pmr_persistence 00:05:32.166 LINK abort 00:05:32.166 CC test/nvme/boot_partition/boot_partition.o 00:05:32.166 CXX test/cpp_headers/iscsi_spec.o 00:05:32.166 LINK simple_copy 00:05:32.166 CXX test/cpp_headers/json.o 00:05:32.166 LINK connect_stress 00:05:32.166 CC test/nvme/compliance/nvme_compliance.o 00:05:32.425 CXX test/cpp_headers/jsonrpc.o 00:05:32.425 LINK boot_partition 00:05:32.425 CXX test/cpp_headers/keyring.o 00:05:32.425 CC test/nvme/fused_ordering/fused_ordering.o 00:05:32.425 CXX test/cpp_headers/keyring_module.o 00:05:32.425 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:32.425 CC test/nvme/fdp/fdp.o 00:05:32.684 CXX test/cpp_headers/likely.o 00:05:32.684 CXX test/cpp_headers/log.o 00:05:32.684 LINK bdevperf 00:05:32.684 CC test/nvme/cuse/cuse.o 00:05:32.684 CXX test/cpp_headers/lvol.o 00:05:32.684 LINK doorbell_aers 00:05:32.684 LINK fused_ordering 00:05:32.684 LINK nvme_compliance 00:05:32.684 CXX test/cpp_headers/md5.o 00:05:32.684 CXX test/cpp_headers/memory.o 00:05:32.684 CXX test/cpp_headers/mmio.o 00:05:32.684 CXX test/cpp_headers/nbd.o 00:05:32.943 CXX test/cpp_headers/net.o 00:05:32.943 CXX test/cpp_headers/notify.o 00:05:32.943 CXX test/cpp_headers/nvme.o 00:05:32.943 CXX test/cpp_headers/nvme_intel.o 00:05:32.943 CXX test/cpp_headers/nvme_ocssd.o 00:05:32.943 LINK fdp 00:05:32.943 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:32.943 CXX test/cpp_headers/nvme_spec.o 00:05:32.943 CC examples/nvmf/nvmf/nvmf.o 00:05:32.943 CXX test/cpp_headers/nvme_zns.o 00:05:32.943 CXX test/cpp_headers/nvmf_cmd.o 00:05:33.203 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:33.203 CXX test/cpp_headers/nvmf.o 00:05:33.203 CXX test/cpp_headers/nvmf_spec.o 00:05:33.203 CXX test/cpp_headers/nvmf_transport.o 00:05:33.203 CXX test/cpp_headers/opal.o 00:05:33.203 CXX test/cpp_headers/opal_spec.o 00:05:33.203 CXX 
test/cpp_headers/pci_ids.o 00:05:33.203 CXX test/cpp_headers/pipe.o 00:05:33.203 CXX test/cpp_headers/queue.o 00:05:33.203 CXX test/cpp_headers/reduce.o 00:05:33.203 LINK nvmf 00:05:33.203 CXX test/cpp_headers/rpc.o 00:05:33.462 CXX test/cpp_headers/scheduler.o 00:05:33.462 CXX test/cpp_headers/scsi.o 00:05:33.462 CXX test/cpp_headers/scsi_spec.o 00:05:33.462 CXX test/cpp_headers/sock.o 00:05:33.462 CXX test/cpp_headers/stdinc.o 00:05:33.462 CXX test/cpp_headers/string.o 00:05:33.462 CXX test/cpp_headers/thread.o 00:05:33.462 CXX test/cpp_headers/trace.o 00:05:33.462 CXX test/cpp_headers/trace_parser.o 00:05:33.721 CXX test/cpp_headers/tree.o 00:05:33.721 CXX test/cpp_headers/ublk.o 00:05:33.721 CXX test/cpp_headers/util.o 00:05:33.722 CXX test/cpp_headers/uuid.o 00:05:33.722 CXX test/cpp_headers/version.o 00:05:33.722 CXX test/cpp_headers/vfio_user_pci.o 00:05:33.722 CXX test/cpp_headers/vfio_user_spec.o 00:05:33.722 CXX test/cpp_headers/vhost.o 00:05:33.722 CXX test/cpp_headers/vmd.o 00:05:33.722 CXX test/cpp_headers/xor.o 00:05:33.722 CXX test/cpp_headers/zipf.o 00:05:34.290 LINK cuse 00:05:36.846 LINK esnap 00:05:37.106 00:05:37.106 real 1m31.544s 00:05:37.106 user 7m9.536s 00:05:37.106 sys 1m17.543s 00:05:37.106 03:14:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:37.106 03:14:24 make -- common/autotest_common.sh@10 -- $ set +x 00:05:37.106 ************************************ 00:05:37.106 END TEST make 00:05:37.106 ************************************ 00:05:37.367 03:14:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:37.367 03:14:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:37.367 03:14:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:37.367 03:14:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.367 03:14:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:37.367 03:14:24 -- pm/common@44 -- $ pid=6211 00:05:37.367 
03:14:24 -- pm/common@50 -- $ kill -TERM 6211 00:05:37.367 03:14:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.367 03:14:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:37.367 03:14:24 -- pm/common@44 -- $ pid=6213 00:05:37.367 03:14:24 -- pm/common@50 -- $ kill -TERM 6213 00:05:37.367 03:14:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:37.367 03:14:24 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:37.367 03:14:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.367 03:14:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.367 03:14:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.367 03:14:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.367 03:14:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.367 03:14:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.367 03:14:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.367 03:14:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.367 03:14:24 -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.367 03:14:24 -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.367 03:14:24 -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.367 03:14:24 -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.367 03:14:24 -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.367 03:14:24 -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.367 03:14:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.367 03:14:24 -- scripts/common.sh@344 -- # case "$op" in 00:05:37.367 03:14:24 -- scripts/common.sh@345 -- # : 1 00:05:37.367 03:14:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.367 03:14:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.367 03:14:24 -- scripts/common.sh@365 -- # decimal 1 00:05:37.367 03:14:24 -- scripts/common.sh@353 -- # local d=1 00:05:37.367 03:14:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.367 03:14:24 -- scripts/common.sh@355 -- # echo 1 00:05:37.367 03:14:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.367 03:14:24 -- scripts/common.sh@366 -- # decimal 2 00:05:37.367 03:14:24 -- scripts/common.sh@353 -- # local d=2 00:05:37.367 03:14:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.367 03:14:24 -- scripts/common.sh@355 -- # echo 2 00:05:37.367 03:14:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.367 03:14:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.367 03:14:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.367 03:14:24 -- scripts/common.sh@368 -- # return 0 00:05:37.367 03:14:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.367 03:14:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.367 --rc genhtml_branch_coverage=1 00:05:37.367 --rc genhtml_function_coverage=1 00:05:37.367 --rc genhtml_legend=1 00:05:37.367 --rc geninfo_all_blocks=1 00:05:37.367 --rc geninfo_unexecuted_blocks=1 00:05:37.367 00:05:37.367 ' 00:05:37.367 03:14:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.367 --rc genhtml_branch_coverage=1 00:05:37.367 --rc genhtml_function_coverage=1 00:05:37.367 --rc genhtml_legend=1 00:05:37.367 --rc geninfo_all_blocks=1 00:05:37.367 --rc geninfo_unexecuted_blocks=1 00:05:37.367 00:05:37.367 ' 00:05:37.628 03:14:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.628 --rc genhtml_branch_coverage=1 00:05:37.628 --rc 
genhtml_function_coverage=1 00:05:37.628 --rc genhtml_legend=1 00:05:37.628 --rc geninfo_all_blocks=1 00:05:37.628 --rc geninfo_unexecuted_blocks=1 00:05:37.628 00:05:37.628 ' 00:05:37.628 03:14:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.628 --rc genhtml_branch_coverage=1 00:05:37.628 --rc genhtml_function_coverage=1 00:05:37.628 --rc genhtml_legend=1 00:05:37.628 --rc geninfo_all_blocks=1 00:05:37.628 --rc geninfo_unexecuted_blocks=1 00:05:37.628 00:05:37.628 ' 00:05:37.628 03:14:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:37.628 03:14:24 -- nvmf/common.sh@7 -- # uname -s 00:05:37.628 03:14:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.628 03:14:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.628 03:14:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.628 03:14:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.628 03:14:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.628 03:14:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.628 03:14:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.628 03:14:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.628 03:14:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.628 03:14:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.628 03:14:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:86a728fc-24bd-4818-abba-33fbc8c192df 00:05:37.628 03:14:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=86a728fc-24bd-4818-abba-33fbc8c192df 00:05:37.628 03:14:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.628 03:14:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.628 03:14:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.628 03:14:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:37.628 03:14:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:37.628 03:14:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.628 03:14:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.628 03:14:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.628 03:14:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.628 03:14:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.628 03:14:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.628 03:14:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.628 03:14:24 -- paths/export.sh@5 -- # export PATH 00:05:37.628 03:14:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.628 03:14:24 -- nvmf/common.sh@51 -- # : 0 00:05:37.628 03:14:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.628 03:14:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.628 03:14:24 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:37.628 03:14:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.628 03:14:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.628 03:14:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.628 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.628 03:14:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.628 03:14:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.628 03:14:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.628 03:14:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:37.628 03:14:24 -- spdk/autotest.sh@32 -- # uname -s 00:05:37.628 03:14:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:37.628 03:14:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:37.628 03:14:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:37.628 03:14:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:37.628 03:14:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:37.628 03:14:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:37.628 03:14:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:37.628 03:14:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:37.628 03:14:25 -- spdk/autotest.sh@48 -- # udevadm_pid=68612 00:05:37.628 03:14:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:37.628 03:14:25 -- pm/common@17 -- # local monitor 00:05:37.628 03:14:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.628 03:14:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:37.628 03:14:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.628 03:14:25 -- pm/common@25 -- # sleep 1 00:05:37.628 03:14:25 -- pm/common@21 -- # date +%s 00:05:37.628 03:14:25 -- 
pm/common@21 -- # date +%s 00:05:37.628 03:14:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732158865 00:05:37.628 03:14:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732158865 00:05:37.628 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732158865_collect-cpu-load.pm.log 00:05:37.628 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732158865_collect-vmstat.pm.log 00:05:38.569 03:14:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:38.569 03:14:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:38.569 03:14:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.569 03:14:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.569 03:14:26 -- spdk/autotest.sh@59 -- # create_test_list 00:05:38.569 03:14:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:38.569 03:14:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.569 03:14:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:38.569 03:14:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:38.569 03:14:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:38.569 03:14:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:38.569 03:14:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:38.569 03:14:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:38.569 03:14:26 -- common/autotest_common.sh@1457 -- # uname 00:05:38.569 03:14:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:38.569 03:14:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:38.829 03:14:26 -- common/autotest_common.sh@1477 -- 
# uname 00:05:38.829 03:14:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:38.829 03:14:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:38.829 03:14:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:38.829 lcov: LCOV version 1.15 00:05:38.829 03:14:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:56.926 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:56.926 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:09.164 03:14:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:09.164 03:14:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.164 03:14:56 -- common/autotest_common.sh@10 -- # set +x 00:06:09.164 03:14:56 -- spdk/autotest.sh@78 -- # rm -f 00:06:09.164 03:14:56 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:10.101 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:10.101 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:10.101 03:14:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:10.101 03:14:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:10.101 03:14:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:10.101 03:14:57 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:10.101 
03:14:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:10.101 03:14:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:10.101 03:14:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:10.101 03:14:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:10.101 03:14:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:10.101 03:14:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:10.101 03:14:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:10.101 03:14:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:10.101 03:14:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:10.101 03:14:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:10.101 03:14:57 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:10.101 03:14:57 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:10.101 03:14:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:10.101 03:14:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.101 03:14:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:10.101 03:14:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.101 03:14:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.101 03:14:57 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:10.101 03:14:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:10.101 03:14:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:10.101 No valid GPT data, bailing 00:06:10.101 03:14:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:10.101 03:14:57 -- scripts/common.sh@394 -- # pt= 00:06:10.101 03:14:57 -- scripts/common.sh@395 -- # return 1 00:06:10.101 03:14:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:10.101 1+0 records in 00:06:10.101 1+0 records out 00:06:10.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625445 s, 168 MB/s 00:06:10.101 03:14:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.101 03:14:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.101 03:14:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:10.101 03:14:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:10.101 03:14:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:10.361 No valid GPT data, bailing 00:06:10.361 03:14:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:10.361 03:14:57 -- scripts/common.sh@394 -- # pt= 00:06:10.361 03:14:57 -- scripts/common.sh@395 -- # return 1 00:06:10.361 03:14:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:10.361 1+0 records in 00:06:10.361 1+0 records out 00:06:10.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063108 s, 166 MB/s 00:06:10.361 03:14:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.361 03:14:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.361 03:14:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:10.361 03:14:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:10.361 03:14:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:10.361 No valid GPT data, bailing 00:06:10.361 03:14:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:10.361 03:14:57 -- scripts/common.sh@394 -- # pt= 00:06:10.361 03:14:57 -- scripts/common.sh@395 -- # return 1 00:06:10.361 03:14:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:10.361 1+0 records in 00:06:10.361 1+0 records out 00:06:10.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477919 s, 219 MB/s 00:06:10.361 03:14:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.361 03:14:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.361 03:14:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:10.361 03:14:57 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:10.361 03:14:57 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:10.361 No valid GPT data, bailing 00:06:10.361 03:14:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:10.361 03:14:57 -- scripts/common.sh@394 -- # pt= 00:06:10.361 03:14:57 -- scripts/common.sh@395 -- # return 1 00:06:10.361 03:14:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:10.361 1+0 records in 00:06:10.361 1+0 records out 00:06:10.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608161 s, 172 MB/s 00:06:10.361 03:14:57 -- spdk/autotest.sh@105 -- # sync 00:06:10.620 03:14:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:10.620 03:14:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:10.620 03:14:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.907 03:15:00 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.907 03:15:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:13.907 03:15:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:13.907 03:15:00 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:06:14.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.166 Hugepages 00:06:14.166 node hugesize free / total 00:06:14.166 node0 1048576kB 0 / 0 00:06:14.166 node0 2048kB 0 / 0 00:06:14.166 00:06:14.166 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.424 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:14.424 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:14.425 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:14.425 03:15:01 -- spdk/autotest.sh@117 -- # uname -s 00:06:14.425 03:15:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:14.425 03:15:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:14.425 03:15:01 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:15.363 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.621 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.621 03:15:03 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:16.588 03:15:04 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:16.588 03:15:04 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:16.588 03:15:04 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:16.588 03:15:04 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:16.588 03:15:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:16.588 03:15:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:16.588 03:15:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:16.588 03:15:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:16.588 03:15:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:16.588 03:15:04 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:16.588 03:15:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:16.588 03:15:04 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.156 Waiting for block devices as requested 00:06:17.156 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.156 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.414 03:15:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:17.414 03:15:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:17.414 03:15:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:17.414 03:15:04 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:17.414 03:15:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:17.414 03:15:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:17.414 03:15:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:17.414 03:15:04 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:17.414 03:15:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:17.414 03:15:04 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:06:17.414 03:15:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:17.414 03:15:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:17.414 03:15:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:17.414 03:15:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:17.414 03:15:04 -- common/autotest_common.sh@1543 -- # continue 00:06:17.414 03:15:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:17.414 03:15:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:17.414 03:15:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.414 03:15:04 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:17.415 03:15:04 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.415 03:15:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:17.415 03:15:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.415 03:15:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:17.415 03:15:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:17.415 03:15:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:17.415 03:15:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:17.415 03:15:04 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:17.415 03:15:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:17.415 03:15:04 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:17.415 03:15:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:17.415 03:15:04 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:17.415 03:15:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:06:17.415 03:15:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:17.415 03:15:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:17.415 03:15:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:17.415 03:15:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:17.415 03:15:04 -- common/autotest_common.sh@1543 -- # continue 00:06:17.415 03:15:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:17.415 03:15:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.415 03:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:17.415 03:15:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:17.415 03:15:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.415 03:15:04 -- common/autotest_common.sh@10 -- # set +x 00:06:17.415 03:15:04 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:18.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.351 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.351 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.351 03:15:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:18.351 03:15:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.351 03:15:05 -- common/autotest_common.sh@10 -- # set +x 00:06:18.611 03:15:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:18.611 03:15:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:18.611 03:15:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:18.611 03:15:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:18.611 03:15:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:18.611 03:15:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:18.611 03:15:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:18.611 03:15:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:18.611 
03:15:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:18.611 03:15:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:18.611 03:15:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:18.611 03:15:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:18.611 03:15:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:18.611 03:15:06 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:18.611 03:15:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:18.611 03:15:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:18.611 03:15:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:18.611 03:15:06 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:18.611 03:15:06 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:18.611 03:15:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:18.611 03:15:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:18.611 03:15:06 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:18.611 03:15:06 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:18.611 03:15:06 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:18.611 03:15:06 -- common/autotest_common.sh@1572 -- # return 0 00:06:18.611 03:15:06 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:18.611 03:15:06 -- common/autotest_common.sh@1580 -- # return 0 00:06:18.611 03:15:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:18.611 03:15:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:18.611 03:15:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:18.611 03:15:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:18.611 03:15:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:18.611 03:15:06 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.611 03:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:18.611 03:15:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:18.611 03:15:06 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:18.611 03:15:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.611 03:15:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.611 03:15:06 -- common/autotest_common.sh@10 -- # set +x 00:06:18.611 ************************************ 00:06:18.611 START TEST env 00:06:18.611 ************************************ 00:06:18.611 03:15:06 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:18.611 * Looking for test storage... 00:06:18.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.870 03:15:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.870 03:15:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.870 03:15:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.870 03:15:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.870 03:15:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.870 03:15:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.870 03:15:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.870 03:15:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.870 03:15:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.870 03:15:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.870 03:15:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.870 03:15:06 env -- 
scripts/common.sh@344 -- # case "$op" in 00:06:18.870 03:15:06 env -- scripts/common.sh@345 -- # : 1 00:06:18.870 03:15:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.870 03:15:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.870 03:15:06 env -- scripts/common.sh@365 -- # decimal 1 00:06:18.870 03:15:06 env -- scripts/common.sh@353 -- # local d=1 00:06:18.870 03:15:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.870 03:15:06 env -- scripts/common.sh@355 -- # echo 1 00:06:18.870 03:15:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.870 03:15:06 env -- scripts/common.sh@366 -- # decimal 2 00:06:18.870 03:15:06 env -- scripts/common.sh@353 -- # local d=2 00:06:18.870 03:15:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.870 03:15:06 env -- scripts/common.sh@355 -- # echo 2 00:06:18.870 03:15:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.870 03:15:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.870 03:15:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.870 03:15:06 env -- scripts/common.sh@368 -- # return 0 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.870 --rc genhtml_branch_coverage=1 00:06:18.870 --rc genhtml_function_coverage=1 00:06:18.870 --rc genhtml_legend=1 00:06:18.870 --rc geninfo_all_blocks=1 00:06:18.870 --rc geninfo_unexecuted_blocks=1 00:06:18.870 00:06:18.870 ' 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.870 --rc genhtml_branch_coverage=1 00:06:18.870 --rc genhtml_function_coverage=1 00:06:18.870 --rc genhtml_legend=1 00:06:18.870 --rc 
geninfo_all_blocks=1 00:06:18.870 --rc geninfo_unexecuted_blocks=1 00:06:18.870 00:06:18.870 ' 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.870 --rc genhtml_branch_coverage=1 00:06:18.870 --rc genhtml_function_coverage=1 00:06:18.870 --rc genhtml_legend=1 00:06:18.870 --rc geninfo_all_blocks=1 00:06:18.870 --rc geninfo_unexecuted_blocks=1 00:06:18.870 00:06:18.870 ' 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.870 --rc genhtml_branch_coverage=1 00:06:18.870 --rc genhtml_function_coverage=1 00:06:18.870 --rc genhtml_legend=1 00:06:18.870 --rc geninfo_all_blocks=1 00:06:18.870 --rc geninfo_unexecuted_blocks=1 00:06:18.870 00:06:18.870 ' 00:06:18.870 03:15:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.870 03:15:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.870 03:15:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.870 ************************************ 00:06:18.870 START TEST env_memory 00:06:18.870 ************************************ 00:06:18.870 03:15:06 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.870 00:06:18.870 00:06:18.870 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.870 http://cunit.sourceforge.net/ 00:06:18.871 00:06:18.871 00:06:18.871 Suite: memory 00:06:18.871 Test: alloc and free memory map ...[2024-11-21 03:15:06.334416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:18.871 passed 00:06:18.871 Test: mem map translation ...[2024-11-21 03:15:06.379230] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:18.871 [2024-11-21 03:15:06.379286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:18.871 [2024-11-21 03:15:06.379365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:18.871 [2024-11-21 03:15:06.379410] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:19.130 passed 00:06:19.130 Test: mem map registration ...[2024-11-21 03:15:06.447062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:19.130 [2024-11-21 03:15:06.447123] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:19.130 passed 00:06:19.130 Test: mem map adjacent registrations ...passed 00:06:19.130 00:06:19.130 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.130 suites 1 1 n/a 0 0 00:06:19.130 tests 4 4 4 0 0 00:06:19.130 asserts 152 152 152 0 n/a 00:06:19.130 00:06:19.130 Elapsed time = 0.248 seconds 00:06:19.130 00:06:19.130 real 0m0.289s 00:06:19.130 user 0m0.257s 00:06:19.130 sys 0m0.025s 00:06:19.130 03:15:06 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.130 03:15:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:19.130 ************************************ 00:06:19.130 END TEST env_memory 00:06:19.130 ************************************ 00:06:19.130 03:15:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:19.130 
03:15:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.130 03:15:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.130 03:15:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.130 ************************************ 00:06:19.130 START TEST env_vtophys 00:06:19.130 ************************************ 00:06:19.130 03:15:06 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:19.130 EAL: lib.eal log level changed from notice to debug 00:06:19.130 EAL: Detected lcore 0 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 1 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 2 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 3 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 4 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 5 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 6 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 7 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 8 as core 0 on socket 0 00:06:19.130 EAL: Detected lcore 9 as core 0 on socket 0 00:06:19.130 EAL: Maximum logical cores by configuration: 128 00:06:19.130 EAL: Detected CPU lcores: 10 00:06:19.130 EAL: Detected NUMA nodes: 1 00:06:19.130 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:06:19.130 EAL: Detected shared linkage of DPDK 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:06:19.130 EAL: Registered [vdev] bus. 
00:06:19.130 EAL: bus.vdev log level changed from disabled to notice 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:06:19.130 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:19.130 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:06:19.130 EAL: open shared lib 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:06:19.130 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:06:19.389 EAL: No shared files mode enabled, IPC will be disabled 00:06:19.389 EAL: No shared files mode enabled, IPC is disabled 00:06:19.389 EAL: Selected IOVA mode 'PA' 00:06:19.389 EAL: Probing VFIO support... 00:06:19.389 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:19.389 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:19.389 EAL: Ask a virtual area of 0x2e000 bytes 00:06:19.389 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:19.389 EAL: Setting up physically contiguous memory... 00:06:19.389 EAL: Setting maximum number of open files to 524288 00:06:19.389 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:19.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:19.389 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.389 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:19.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.389 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.389 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:19.389 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:19.389 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.389 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:19.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.389 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:19.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:19.389 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.389 EAL: Virtual 
area found at 0x200800400000 (size = 0x61000) 00:06:19.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.389 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:19.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:19.389 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:19.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.389 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.389 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:19.389 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:19.389 EAL: Hugepages will be freed exactly as allocated. 00:06:19.389 EAL: No shared files mode enabled, IPC is disabled 00:06:19.389 EAL: No shared files mode enabled, IPC is disabled 00:06:19.389 EAL: TSC frequency is ~2294600 KHz 00:06:19.389 EAL: Main lcore 0 is ready (tid=7f1c41c40a40;cpuset=[0]) 00:06:19.389 EAL: Trying to obtain current memory policy. 00:06:19.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.389 EAL: Restoring previous memory policy: 0 00:06:19.389 EAL: request: mp_malloc_sync 00:06:19.389 EAL: No shared files mode enabled, IPC is disabled 00:06:19.389 EAL: Heap on socket 0 was expanded by 2MB 00:06:19.389 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:06:19.389 EAL: No shared files mode enabled, IPC is disabled 00:06:19.389 EAL: Mem event callback 'spdk:(nil)' registered 00:06:19.389 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:19.389 00:06:19.389 00:06:19.389 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.389 http://cunit.sourceforge.net/ 00:06:19.389 00:06:19.389 00:06:19.389 Suite: components_suite 00:06:19.957 Test: vtophys_malloc_test ...passed 00:06:19.957 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.957 EAL: Restoring previous memory policy: 4 00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.957 EAL: request: mp_malloc_sync 00:06:19.957 EAL: No shared files mode enabled, IPC is disabled 00:06:19.957 EAL: Heap on socket 0 was expanded by 4MB 00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.957 EAL: request: mp_malloc_sync 00:06:19.957 EAL: No shared files mode enabled, IPC is disabled 00:06:19.957 EAL: Heap on socket 0 was shrunk by 4MB 00:06:19.957 EAL: Trying to obtain current memory policy. 00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.957 EAL: Restoring previous memory policy: 4 00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.957 EAL: request: mp_malloc_sync 00:06:19.957 EAL: No shared files mode enabled, IPC is disabled 00:06:19.957 EAL: Heap on socket 0 was expanded by 6MB 00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.957 EAL: request: mp_malloc_sync 00:06:19.957 EAL: No shared files mode enabled, IPC is disabled 00:06:19.957 EAL: Heap on socket 0 was shrunk by 6MB 00:06:19.957 EAL: Trying to obtain current memory policy. 
00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.957 EAL: Restoring previous memory policy: 4
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was expanded by 10MB
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was shrunk by 10MB
00:06:19.957 EAL: Trying to obtain current memory policy.
00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.957 EAL: Restoring previous memory policy: 4
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was expanded by 18MB
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was shrunk by 18MB
00:06:19.957 EAL: Trying to obtain current memory policy.
00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.957 EAL: Restoring previous memory policy: 4
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was expanded by 34MB
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was shrunk by 34MB
00:06:19.957 EAL: Trying to obtain current memory policy.
00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.957 EAL: Restoring previous memory policy: 4
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was expanded by 66MB
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was shrunk by 66MB
00:06:19.957 EAL: Trying to obtain current memory policy.
00:06:19.957 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.957 EAL: Restoring previous memory policy: 4
00:06:19.957 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.957 EAL: request: mp_malloc_sync
00:06:19.957 EAL: No shared files mode enabled, IPC is disabled
00:06:19.957 EAL: Heap on socket 0 was expanded by 130MB
00:06:20.216 EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.216 EAL: request: mp_malloc_sync
00:06:20.216 EAL: No shared files mode enabled, IPC is disabled
00:06:20.216 EAL: Heap on socket 0 was shrunk by 130MB
00:06:20.216 EAL: Trying to obtain current memory policy.
00:06:20.216 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:20.216 EAL: Restoring previous memory policy: 4
00:06:20.217 EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.217 EAL: request: mp_malloc_sync
00:06:20.217 EAL: No shared files mode enabled, IPC is disabled
00:06:20.217 EAL: Heap on socket 0 was expanded by 258MB
00:06:20.217 EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.475 EAL: request: mp_malloc_sync
00:06:20.475 EAL: No shared files mode enabled, IPC is disabled
00:06:20.475 EAL: Heap on socket 0 was shrunk by 258MB
00:06:20.475 EAL: Trying to obtain current memory policy.
00:06:20.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.475 EAL: Restoring previous memory policy: 4 00:06:20.475 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.475 EAL: request: mp_malloc_sync 00:06:20.476 EAL: No shared files mode enabled, IPC is disabled 00:06:20.476 EAL: Heap on socket 0 was expanded by 514MB 00:06:20.734 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.993 EAL: request: mp_malloc_sync 00:06:20.993 EAL: No shared files mode enabled, IPC is disabled 00:06:20.993 EAL: Heap on socket 0 was shrunk by 514MB 00:06:20.993 EAL: Trying to obtain current memory policy. 00:06:20.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.252 EAL: Restoring previous memory policy: 4 00:06:21.252 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.252 EAL: request: mp_malloc_sync 00:06:21.252 EAL: No shared files mode enabled, IPC is disabled 00:06:21.252 EAL: Heap on socket 0 was expanded by 1026MB 00:06:21.510 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.783 passed 00:06:21.783 00:06:21.783 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.783 suites 1 1 n/a 0 0 00:06:21.783 tests 2 2 2 0 0 00:06:21.783 asserts 5449 5449 5449 0 n/a 00:06:21.783 00:06:21.783 Elapsed time = 2.469 seconds 00:06:21.783 EAL: request: mp_malloc_sync 00:06:21.783 EAL: No shared files mode enabled, IPC is disabled 00:06:21.783 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:21.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.783 EAL: request: mp_malloc_sync 00:06:21.783 EAL: No shared files mode enabled, IPC is disabled 00:06:21.783 EAL: Heap on socket 0 was shrunk by 2MB 00:06:21.784 EAL: No shared files mode enabled, IPC is disabled 00:06:21.784 EAL: No shared files mode enabled, IPC is disabled 00:06:21.784 EAL: No shared files mode enabled, IPC is disabled 00:06:22.046 00:06:22.046 real 0m2.747s 00:06:22.046 user 0m1.401s 00:06:22.046 sys 0m1.209s 00:06:22.046 03:15:09 env.env_vtophys -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:06:22.046 03:15:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:22.046 ************************************ 00:06:22.046 END TEST env_vtophys 00:06:22.046 ************************************ 00:06:22.046 03:15:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:22.046 03:15:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.046 03:15:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.046 03:15:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.046 ************************************ 00:06:22.046 START TEST env_pci 00:06:22.046 ************************************ 00:06:22.046 03:15:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:22.046 00:06:22.046 00:06:22.046 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.046 http://cunit.sourceforge.net/ 00:06:22.046 00:06:22.046 00:06:22.046 Suite: pci 00:06:22.046 Test: pci_hook ...[2024-11-21 03:15:09.484105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70906 has claimed it 00:06:22.046 EAL: Cannot find device (10000:00:01.0) 00:06:22.046 EAL: Failed to attach device on primary process 00:06:22.046 passed 00:06:22.046 00:06:22.046 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.046 suites 1 1 n/a 0 0 00:06:22.046 tests 1 1 1 0 0 00:06:22.046 asserts 25 25 25 0 n/a 00:06:22.046 00:06:22.046 Elapsed time = 0.009 seconds 00:06:22.046 00:06:22.046 real 0m0.129s 00:06:22.046 user 0m0.056s 00:06:22.046 sys 0m0.071s 00:06:22.046 03:15:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.046 03:15:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:22.046 ************************************ 00:06:22.046 END TEST env_pci 00:06:22.046 
************************************ 00:06:22.304 03:15:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:22.304 03:15:09 env -- env/env.sh@15 -- # uname 00:06:22.304 03:15:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:22.304 03:15:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:22.304 03:15:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:22.304 03:15:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:22.304 03:15:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.304 03:15:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.304 ************************************ 00:06:22.304 START TEST env_dpdk_post_init 00:06:22.304 ************************************ 00:06:22.304 03:15:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:22.304 EAL: Detected CPU lcores: 10 00:06:22.304 EAL: Detected NUMA nodes: 1 00:06:22.304 EAL: Detected shared linkage of DPDK 00:06:22.304 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.304 EAL: Selected IOVA mode 'PA' 00:06:22.562 Starting DPDK initialization... 00:06:22.562 Starting SPDK post initialization... 00:06:22.562 SPDK NVMe probe 00:06:22.562 Attaching to 0000:00:10.0 00:06:22.562 Attaching to 0000:00:11.0 00:06:22.562 Attached to 0000:00:10.0 00:06:22.562 Attached to 0000:00:11.0 00:06:22.562 Cleaning up... 
00:06:22.562 00:06:22.562 real 0m0.274s 00:06:22.562 user 0m0.075s 00:06:22.562 sys 0m0.100s 00:06:22.562 03:15:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.562 03:15:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.562 ************************************ 00:06:22.562 END TEST env_dpdk_post_init 00:06:22.562 ************************************ 00:06:22.562 03:15:09 env -- env/env.sh@26 -- # uname 00:06:22.562 03:15:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:22.562 03:15:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.562 03:15:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.562 03:15:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.562 03:15:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.562 ************************************ 00:06:22.562 START TEST env_mem_callbacks 00:06:22.562 ************************************ 00:06:22.562 03:15:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.562 EAL: Detected CPU lcores: 10 00:06:22.562 EAL: Detected NUMA nodes: 1 00:06:22.562 EAL: Detected shared linkage of DPDK 00:06:22.562 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.562 EAL: Selected IOVA mode 'PA' 00:06:22.821 00:06:22.821 00:06:22.821 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.821 http://cunit.sourceforge.net/ 00:06:22.821 00:06:22.821 00:06:22.821 Suite: memory 00:06:22.821 Test: test ... 
00:06:22.821 register 0x200000200000 2097152 00:06:22.821 malloc 3145728 00:06:22.821 register 0x200000400000 4194304 00:06:22.821 buf 0x200000500000 len 3145728 PASSED 00:06:22.821 malloc 64 00:06:22.821 buf 0x2000004fff40 len 64 PASSED 00:06:22.821 malloc 4194304 00:06:22.821 register 0x200000800000 6291456 00:06:22.821 buf 0x200000a00000 len 4194304 PASSED 00:06:22.821 free 0x200000500000 3145728 00:06:22.821 free 0x2000004fff40 64 00:06:22.821 unregister 0x200000400000 4194304 PASSED 00:06:22.821 free 0x200000a00000 4194304 00:06:22.821 unregister 0x200000800000 6291456 PASSED 00:06:22.821 malloc 8388608 00:06:22.821 register 0x200000400000 10485760 00:06:22.821 buf 0x200000600000 len 8388608 PASSED 00:06:22.821 free 0x200000600000 8388608 00:06:22.821 unregister 0x200000400000 10485760 PASSED 00:06:22.821 passed 00:06:22.821 00:06:22.821 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.821 suites 1 1 n/a 0 0 00:06:22.821 tests 1 1 1 0 0 00:06:22.821 asserts 15 15 15 0 n/a 00:06:22.821 00:06:22.821 Elapsed time = 0.014 seconds 00:06:22.821 00:06:22.821 real 0m0.214s 00:06:22.821 user 0m0.043s 00:06:22.821 sys 0m0.069s 00:06:22.821 03:15:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.821 03:15:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:22.821 ************************************ 00:06:22.821 END TEST env_mem_callbacks 00:06:22.821 ************************************ 00:06:22.821 00:06:22.821 real 0m4.196s 00:06:22.821 user 0m2.044s 00:06:22.821 sys 0m1.818s 00:06:22.821 03:15:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.821 03:15:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.821 ************************************ 00:06:22.821 END TEST env 00:06:22.821 ************************************ 00:06:22.821 03:15:10 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:22.821 03:15:10 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.821 03:15:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.821 03:15:10 -- common/autotest_common.sh@10 -- # set +x 00:06:22.821 ************************************ 00:06:22.821 START TEST rpc 00:06:22.821 ************************************ 00:06:22.821 03:15:10 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:23.080 * Looking for test storage... 00:06:23.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.080 03:15:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.080 03:15:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.080 03:15:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.080 03:15:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.080 03:15:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.080 03:15:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:23.080 03:15:10 rpc -- scripts/common.sh@345 -- # : 1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.080 03:15:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.080 03:15:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@353 -- # local d=1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.080 03:15:10 rpc -- scripts/common.sh@355 -- # echo 1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.080 03:15:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@353 -- # local d=2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.080 03:15:10 rpc -- scripts/common.sh@355 -- # echo 2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.080 03:15:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.080 03:15:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.080 03:15:10 rpc -- scripts/common.sh@368 -- # return 0 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.080 --rc genhtml_branch_coverage=1 00:06:23.080 --rc genhtml_function_coverage=1 00:06:23.080 --rc genhtml_legend=1 00:06:23.080 --rc geninfo_all_blocks=1 00:06:23.080 --rc geninfo_unexecuted_blocks=1 00:06:23.080 00:06:23.080 ' 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.080 --rc genhtml_branch_coverage=1 00:06:23.080 --rc genhtml_function_coverage=1 00:06:23.080 --rc genhtml_legend=1 00:06:23.080 --rc geninfo_all_blocks=1 00:06:23.080 --rc geninfo_unexecuted_blocks=1 00:06:23.080 00:06:23.080 ' 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:23.080 --rc genhtml_branch_coverage=1 00:06:23.080 --rc genhtml_function_coverage=1 00:06:23.080 --rc genhtml_legend=1 00:06:23.080 --rc geninfo_all_blocks=1 00:06:23.080 --rc geninfo_unexecuted_blocks=1 00:06:23.080 00:06:23.080 ' 00:06:23.080 03:15:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.080 --rc genhtml_branch_coverage=1 00:06:23.080 --rc genhtml_function_coverage=1 00:06:23.080 --rc genhtml_legend=1 00:06:23.080 --rc geninfo_all_blocks=1 00:06:23.080 --rc geninfo_unexecuted_blocks=1 00:06:23.080 00:06:23.080 ' 00:06:23.080 03:15:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71033 00:06:23.080 03:15:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:23.081 03:15:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.081 03:15:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71033 00:06:23.081 03:15:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 71033 ']' 00:06:23.081 03:15:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.081 03:15:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.081 03:15:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.081 03:15:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.081 03:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.339 [2024-11-21 03:15:10.679984] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:06:23.339 [2024-11-21 03:15:10.680179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71033 ] 00:06:23.339 [2024-11-21 03:15:10.824777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.339 [2024-11-21 03:15:10.862480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.597 [2024-11-21 03:15:10.906508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:23.597 [2024-11-21 03:15:10.906588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71033' to capture a snapshot of events at runtime. 00:06:23.598 [2024-11-21 03:15:10.906600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.598 [2024-11-21 03:15:10.906613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.598 [2024-11-21 03:15:10.906621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71033 for offline analysis/debug. 
00:06:23.598 [2024-11-21 03:15:10.907084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.165 03:15:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.165 03:15:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.165 03:15:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:24.165 03:15:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:24.165 03:15:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:24.165 03:15:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:24.165 03:15:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.165 03:15:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.166 03:15:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 ************************************ 00:06:24.166 START TEST rpc_integrity 00:06:24.166 ************************************ 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.166 03:15:11 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.166 { 00:06:24.166 "name": "Malloc0", 00:06:24.166 "aliases": [ 00:06:24.166 "22b22199-6b55-4e62-8eb8-74069ac95497" 00:06:24.166 ], 00:06:24.166 "product_name": "Malloc disk", 00:06:24.166 "block_size": 512, 00:06:24.166 "num_blocks": 16384, 00:06:24.166 "uuid": "22b22199-6b55-4e62-8eb8-74069ac95497", 00:06:24.166 "assigned_rate_limits": { 00:06:24.166 "rw_ios_per_sec": 0, 00:06:24.166 "rw_mbytes_per_sec": 0, 00:06:24.166 "r_mbytes_per_sec": 0, 00:06:24.166 "w_mbytes_per_sec": 0 00:06:24.166 }, 00:06:24.166 "claimed": false, 00:06:24.166 "zoned": false, 00:06:24.166 "supported_io_types": { 00:06:24.166 "read": true, 00:06:24.166 "write": true, 00:06:24.166 "unmap": true, 00:06:24.166 "flush": true, 00:06:24.166 "reset": true, 00:06:24.166 "nvme_admin": false, 00:06:24.166 "nvme_io": false, 00:06:24.166 "nvme_io_md": false, 00:06:24.166 "write_zeroes": true, 00:06:24.166 "zcopy": true, 00:06:24.166 "get_zone_info": false, 00:06:24.166 "zone_management": false, 00:06:24.166 "zone_append": false, 00:06:24.166 "compare": false, 00:06:24.166 "compare_and_write": false, 00:06:24.166 "abort": true, 00:06:24.166 "seek_hole": false, 
00:06:24.166 "seek_data": false, 00:06:24.166 "copy": true, 00:06:24.166 "nvme_iov_md": false 00:06:24.166 }, 00:06:24.166 "memory_domains": [ 00:06:24.166 { 00:06:24.166 "dma_device_id": "system", 00:06:24.166 "dma_device_type": 1 00:06:24.166 }, 00:06:24.166 { 00:06:24.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.166 "dma_device_type": 2 00:06:24.166 } 00:06:24.166 ], 00:06:24.166 "driver_specific": {} 00:06:24.166 } 00:06:24.166 ]' 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.166 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.166 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 [2024-11-21 03:15:11.725934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:24.166 [2024-11-21 03:15:11.726062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.166 [2024-11-21 03:15:11.726103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:24.166 [2024-11-21 03:15:11.726120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.166 [2024-11-21 03:15:11.729315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.166 [2024-11-21 03:15:11.729368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.425 Passthru0 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.425 { 00:06:24.425 "name": "Malloc0", 00:06:24.425 "aliases": [ 00:06:24.425 "22b22199-6b55-4e62-8eb8-74069ac95497" 00:06:24.425 ], 00:06:24.425 "product_name": "Malloc disk", 00:06:24.425 "block_size": 512, 00:06:24.425 "num_blocks": 16384, 00:06:24.425 "uuid": "22b22199-6b55-4e62-8eb8-74069ac95497", 00:06:24.425 "assigned_rate_limits": { 00:06:24.425 "rw_ios_per_sec": 0, 00:06:24.425 "rw_mbytes_per_sec": 0, 00:06:24.425 "r_mbytes_per_sec": 0, 00:06:24.425 "w_mbytes_per_sec": 0 00:06:24.425 }, 00:06:24.425 "claimed": true, 00:06:24.425 "claim_type": "exclusive_write", 00:06:24.425 "zoned": false, 00:06:24.425 "supported_io_types": { 00:06:24.425 "read": true, 00:06:24.425 "write": true, 00:06:24.425 "unmap": true, 00:06:24.425 "flush": true, 00:06:24.425 "reset": true, 00:06:24.425 "nvme_admin": false, 00:06:24.425 "nvme_io": false, 00:06:24.425 "nvme_io_md": false, 00:06:24.425 "write_zeroes": true, 00:06:24.425 "zcopy": true, 00:06:24.425 "get_zone_info": false, 00:06:24.425 "zone_management": false, 00:06:24.425 "zone_append": false, 00:06:24.425 "compare": false, 00:06:24.425 "compare_and_write": false, 00:06:24.425 "abort": true, 00:06:24.425 "seek_hole": false, 00:06:24.425 "seek_data": false, 00:06:24.425 "copy": true, 00:06:24.425 "nvme_iov_md": false 00:06:24.425 }, 00:06:24.425 "memory_domains": [ 00:06:24.425 { 00:06:24.425 "dma_device_id": "system", 00:06:24.425 "dma_device_type": 1 00:06:24.425 }, 00:06:24.425 { 00:06:24.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.425 "dma_device_type": 2 00:06:24.425 } 00:06:24.425 ], 00:06:24.425 "driver_specific": {} 00:06:24.425 }, 00:06:24.425 { 00:06:24.425 "name": "Passthru0", 00:06:24.425 "aliases": [ 00:06:24.425 "6947e35a-b656-5665-bfaa-b330fdc35841" 00:06:24.425 ], 00:06:24.425 "product_name": "passthru", 00:06:24.425 
"block_size": 512, 00:06:24.425 "num_blocks": 16384, 00:06:24.425 "uuid": "6947e35a-b656-5665-bfaa-b330fdc35841", 00:06:24.425 "assigned_rate_limits": { 00:06:24.425 "rw_ios_per_sec": 0, 00:06:24.425 "rw_mbytes_per_sec": 0, 00:06:24.425 "r_mbytes_per_sec": 0, 00:06:24.425 "w_mbytes_per_sec": 0 00:06:24.425 }, 00:06:24.425 "claimed": false, 00:06:24.425 "zoned": false, 00:06:24.425 "supported_io_types": { 00:06:24.425 "read": true, 00:06:24.425 "write": true, 00:06:24.425 "unmap": true, 00:06:24.425 "flush": true, 00:06:24.425 "reset": true, 00:06:24.425 "nvme_admin": false, 00:06:24.425 "nvme_io": false, 00:06:24.425 "nvme_io_md": false, 00:06:24.425 "write_zeroes": true, 00:06:24.425 "zcopy": true, 00:06:24.425 "get_zone_info": false, 00:06:24.425 "zone_management": false, 00:06:24.425 "zone_append": false, 00:06:24.425 "compare": false, 00:06:24.425 "compare_and_write": false, 00:06:24.425 "abort": true, 00:06:24.425 "seek_hole": false, 00:06:24.425 "seek_data": false, 00:06:24.425 "copy": true, 00:06:24.425 "nvme_iov_md": false 00:06:24.425 }, 00:06:24.425 "memory_domains": [ 00:06:24.425 { 00:06:24.425 "dma_device_id": "system", 00:06:24.425 "dma_device_type": 1 00:06:24.425 }, 00:06:24.425 { 00:06:24.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.425 "dma_device_type": 2 00:06:24.425 } 00:06:24.425 ], 00:06:24.425 "driver_specific": { 00:06:24.425 "passthru": { 00:06:24.425 "name": "Passthru0", 00:06:24.425 "base_bdev_name": "Malloc0" 00:06:24.425 } 00:06:24.425 } 00:06:24.425 } 00:06:24.425 ]' 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.425 03:15:11 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.425 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.425 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:24.426 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.426 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.426 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.426 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:24.426 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.426 03:15:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.426 00:06:24.426 real 0m0.329s 00:06:24.426 user 0m0.202s 00:06:24.426 sys 0m0.053s 00:06:24.426 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.426 03:15:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.426 ************************************ 00:06:24.426 END TEST rpc_integrity 00:06:24.426 ************************************ 00:06:24.426 03:15:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:24.426 03:15:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.426 03:15:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.426 03:15:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.426 ************************************ 00:06:24.426 START TEST rpc_plugins 00:06:24.426 ************************************ 00:06:24.426 03:15:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:24.426 03:15:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:24.426 03:15:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.426 03:15:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.426 03:15:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.426 03:15:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:24.426 03:15:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:24.426 03:15:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.426 03:15:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:24.685 { 00:06:24.685 "name": "Malloc1", 00:06:24.685 "aliases": [ 00:06:24.685 "6ec2c17c-ad03-49c3-bcf2-9045831a1246" 00:06:24.685 ], 00:06:24.685 "product_name": "Malloc disk", 00:06:24.685 "block_size": 4096, 00:06:24.685 "num_blocks": 256, 00:06:24.685 "uuid": "6ec2c17c-ad03-49c3-bcf2-9045831a1246", 00:06:24.685 "assigned_rate_limits": { 00:06:24.685 "rw_ios_per_sec": 0, 00:06:24.685 "rw_mbytes_per_sec": 0, 00:06:24.685 "r_mbytes_per_sec": 0, 00:06:24.685 "w_mbytes_per_sec": 0 00:06:24.685 }, 00:06:24.685 "claimed": false, 00:06:24.685 "zoned": false, 00:06:24.685 "supported_io_types": { 00:06:24.685 "read": true, 00:06:24.685 "write": true, 00:06:24.685 "unmap": true, 00:06:24.685 "flush": true, 00:06:24.685 "reset": true, 00:06:24.685 "nvme_admin": false, 00:06:24.685 "nvme_io": false, 00:06:24.685 "nvme_io_md": false, 00:06:24.685 "write_zeroes": true, 00:06:24.685 "zcopy": true, 00:06:24.685 "get_zone_info": false, 00:06:24.685 "zone_management": false, 00:06:24.685 "zone_append": false, 00:06:24.685 "compare": false, 00:06:24.685 "compare_and_write": false, 00:06:24.685 "abort": true, 00:06:24.685 "seek_hole": false, 00:06:24.685 "seek_data": false, 00:06:24.685 "copy": 
true, 00:06:24.685 "nvme_iov_md": false 00:06:24.685 }, 00:06:24.685 "memory_domains": [ 00:06:24.685 { 00:06:24.685 "dma_device_id": "system", 00:06:24.685 "dma_device_type": 1 00:06:24.685 }, 00:06:24.685 { 00:06:24.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.685 "dma_device_type": 2 00:06:24.685 } 00:06:24.685 ], 00:06:24.685 "driver_specific": {} 00:06:24.685 } 00:06:24.685 ]' 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:24.685 03:15:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:24.685 00:06:24.685 real 0m0.161s 00:06:24.685 user 0m0.089s 00:06:24.685 sys 0m0.029s 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.685 03:15:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.685 ************************************ 00:06:24.685 END TEST rpc_plugins 00:06:24.685 ************************************ 00:06:24.685 03:15:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:24.685 03:15:12 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.685 03:15:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.685 03:15:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.685 ************************************ 00:06:24.685 START TEST rpc_trace_cmd_test 00:06:24.685 ************************************ 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:24.685 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71033", 00:06:24.685 "tpoint_group_mask": "0x8", 00:06:24.685 "iscsi_conn": { 00:06:24.685 "mask": "0x2", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "scsi": { 00:06:24.685 "mask": "0x4", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "bdev": { 00:06:24.685 "mask": "0x8", 00:06:24.685 "tpoint_mask": "0xffffffffffffffff" 00:06:24.685 }, 00:06:24.685 "nvmf_rdma": { 00:06:24.685 "mask": "0x10", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "nvmf_tcp": { 00:06:24.685 "mask": "0x20", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "ftl": { 00:06:24.685 "mask": "0x40", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "blobfs": { 00:06:24.685 "mask": "0x80", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "dsa": { 00:06:24.685 "mask": "0x200", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "thread": { 00:06:24.685 "mask": "0x400", 00:06:24.685 
"tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "nvme_pcie": { 00:06:24.685 "mask": "0x800", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "iaa": { 00:06:24.685 "mask": "0x1000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "nvme_tcp": { 00:06:24.685 "mask": "0x2000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "bdev_nvme": { 00:06:24.685 "mask": "0x4000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "sock": { 00:06:24.685 "mask": "0x8000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "blob": { 00:06:24.685 "mask": "0x10000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "bdev_raid": { 00:06:24.685 "mask": "0x20000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 }, 00:06:24.685 "scheduler": { 00:06:24.685 "mask": "0x40000", 00:06:24.685 "tpoint_mask": "0x0" 00:06:24.685 } 00:06:24.685 }' 00:06:24.685 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:24.945 00:06:24.945 real 0m0.240s 00:06:24.945 user 0m0.193s 00:06:24.945 sys 0m0.039s 00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:24.945 03:15:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.945 ************************************ 00:06:24.945 END TEST rpc_trace_cmd_test 00:06:24.945 ************************************ 00:06:24.945 03:15:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:24.945 03:15:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:24.945 03:15:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:24.945 03:15:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.945 03:15:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.945 03:15:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.945 ************************************ 00:06:24.945 START TEST rpc_daemon_integrity 00:06:24.945 ************************************ 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.945 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:25.204 { 00:06:25.204 "name": "Malloc2", 00:06:25.204 "aliases": [ 00:06:25.204 "cd81f355-bfdc-4c5d-a8a0-8a09ae560f59" 00:06:25.204 ], 00:06:25.204 "product_name": "Malloc disk", 00:06:25.204 "block_size": 512, 00:06:25.204 "num_blocks": 16384, 00:06:25.204 "uuid": "cd81f355-bfdc-4c5d-a8a0-8a09ae560f59", 00:06:25.204 "assigned_rate_limits": { 00:06:25.204 "rw_ios_per_sec": 0, 00:06:25.204 "rw_mbytes_per_sec": 0, 00:06:25.204 "r_mbytes_per_sec": 0, 00:06:25.204 "w_mbytes_per_sec": 0 00:06:25.204 }, 00:06:25.204 "claimed": false, 00:06:25.204 "zoned": false, 00:06:25.204 "supported_io_types": { 00:06:25.204 "read": true, 00:06:25.204 "write": true, 00:06:25.204 "unmap": true, 00:06:25.204 "flush": true, 00:06:25.204 "reset": true, 00:06:25.204 "nvme_admin": false, 00:06:25.204 "nvme_io": false, 00:06:25.204 "nvme_io_md": false, 00:06:25.204 "write_zeroes": true, 00:06:25.204 "zcopy": true, 00:06:25.204 "get_zone_info": false, 00:06:25.204 "zone_management": false, 00:06:25.204 "zone_append": false, 00:06:25.204 "compare": false, 00:06:25.204 "compare_and_write": false, 00:06:25.204 "abort": true, 00:06:25.204 "seek_hole": false, 00:06:25.204 "seek_data": false, 00:06:25.204 "copy": true, 00:06:25.204 "nvme_iov_md": false 00:06:25.204 }, 00:06:25.204 "memory_domains": [ 00:06:25.204 { 00:06:25.204 "dma_device_id": "system", 00:06:25.204 "dma_device_type": 1 00:06:25.204 }, 00:06:25.204 { 00:06:25.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.204 "dma_device_type": 2 00:06:25.204 } 
00:06:25.204 ], 00:06:25.204 "driver_specific": {} 00:06:25.204 } 00:06:25.204 ]' 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:25.204 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 [2024-11-21 03:15:12.631468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:25.205 [2024-11-21 03:15:12.631575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.205 [2024-11-21 03:15:12.631610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:25.205 [2024-11-21 03:15:12.631625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.205 [2024-11-21 03:15:12.634758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:25.205 [2024-11-21 03:15:12.634817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:25.205 Passthru0 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:25.205 { 00:06:25.205 "name": "Malloc2", 00:06:25.205 "aliases": [ 00:06:25.205 "cd81f355-bfdc-4c5d-a8a0-8a09ae560f59" 
00:06:25.205 ], 00:06:25.205 "product_name": "Malloc disk", 00:06:25.205 "block_size": 512, 00:06:25.205 "num_blocks": 16384, 00:06:25.205 "uuid": "cd81f355-bfdc-4c5d-a8a0-8a09ae560f59", 00:06:25.205 "assigned_rate_limits": { 00:06:25.205 "rw_ios_per_sec": 0, 00:06:25.205 "rw_mbytes_per_sec": 0, 00:06:25.205 "r_mbytes_per_sec": 0, 00:06:25.205 "w_mbytes_per_sec": 0 00:06:25.205 }, 00:06:25.205 "claimed": true, 00:06:25.205 "claim_type": "exclusive_write", 00:06:25.205 "zoned": false, 00:06:25.205 "supported_io_types": { 00:06:25.205 "read": true, 00:06:25.205 "write": true, 00:06:25.205 "unmap": true, 00:06:25.205 "flush": true, 00:06:25.205 "reset": true, 00:06:25.205 "nvme_admin": false, 00:06:25.205 "nvme_io": false, 00:06:25.205 "nvme_io_md": false, 00:06:25.205 "write_zeroes": true, 00:06:25.205 "zcopy": true, 00:06:25.205 "get_zone_info": false, 00:06:25.205 "zone_management": false, 00:06:25.205 "zone_append": false, 00:06:25.205 "compare": false, 00:06:25.205 "compare_and_write": false, 00:06:25.205 "abort": true, 00:06:25.205 "seek_hole": false, 00:06:25.205 "seek_data": false, 00:06:25.205 "copy": true, 00:06:25.205 "nvme_iov_md": false 00:06:25.205 }, 00:06:25.205 "memory_domains": [ 00:06:25.205 { 00:06:25.205 "dma_device_id": "system", 00:06:25.205 "dma_device_type": 1 00:06:25.205 }, 00:06:25.205 { 00:06:25.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.205 "dma_device_type": 2 00:06:25.205 } 00:06:25.205 ], 00:06:25.205 "driver_specific": {} 00:06:25.205 }, 00:06:25.205 { 00:06:25.205 "name": "Passthru0", 00:06:25.205 "aliases": [ 00:06:25.205 "e5b5c536-268b-5bfa-a6b8-60b5a6c03be6" 00:06:25.205 ], 00:06:25.205 "product_name": "passthru", 00:06:25.205 "block_size": 512, 00:06:25.205 "num_blocks": 16384, 00:06:25.205 "uuid": "e5b5c536-268b-5bfa-a6b8-60b5a6c03be6", 00:06:25.205 "assigned_rate_limits": { 00:06:25.205 "rw_ios_per_sec": 0, 00:06:25.205 "rw_mbytes_per_sec": 0, 00:06:25.205 "r_mbytes_per_sec": 0, 00:06:25.205 "w_mbytes_per_sec": 0 
00:06:25.205 }, 00:06:25.205 "claimed": false, 00:06:25.205 "zoned": false, 00:06:25.205 "supported_io_types": { 00:06:25.205 "read": true, 00:06:25.205 "write": true, 00:06:25.205 "unmap": true, 00:06:25.205 "flush": true, 00:06:25.205 "reset": true, 00:06:25.205 "nvme_admin": false, 00:06:25.205 "nvme_io": false, 00:06:25.205 "nvme_io_md": false, 00:06:25.205 "write_zeroes": true, 00:06:25.205 "zcopy": true, 00:06:25.205 "get_zone_info": false, 00:06:25.205 "zone_management": false, 00:06:25.205 "zone_append": false, 00:06:25.205 "compare": false, 00:06:25.205 "compare_and_write": false, 00:06:25.205 "abort": true, 00:06:25.205 "seek_hole": false, 00:06:25.205 "seek_data": false, 00:06:25.205 "copy": true, 00:06:25.205 "nvme_iov_md": false 00:06:25.205 }, 00:06:25.205 "memory_domains": [ 00:06:25.205 { 00:06:25.205 "dma_device_id": "system", 00:06:25.205 "dma_device_type": 1 00:06:25.205 }, 00:06:25.205 { 00:06:25.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.205 "dma_device_type": 2 00:06:25.205 } 00:06:25.205 ], 00:06:25.205 "driver_specific": { 00:06:25.205 "passthru": { 00:06:25.205 "name": "Passthru0", 00:06:25.205 "base_bdev_name": "Malloc2" 00:06:25.205 } 00:06:25.205 } 00:06:25.205 } 00:06:25.205 ]' 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:25.205 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:25.464 03:15:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:25.464 00:06:25.464 real 0m0.319s 00:06:25.464 user 0m0.189s 00:06:25.464 sys 0m0.055s 00:06:25.464 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.464 03:15:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.464 ************************************ 00:06:25.464 END TEST rpc_daemon_integrity 00:06:25.464 ************************************ 00:06:25.464 03:15:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:25.464 03:15:12 rpc -- rpc/rpc.sh@84 -- # killprocess 71033 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 71033 ']' 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@958 -- # kill -0 71033 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@959 -- # uname 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71033 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.464 
03:15:12 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71033' 00:06:25.464 killing process with pid 71033 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@973 -- # kill 71033 00:06:25.464 03:15:12 rpc -- common/autotest_common.sh@978 -- # wait 71033 00:06:26.032 00:06:26.032 real 0m3.177s 00:06:26.032 user 0m3.670s 00:06:26.032 sys 0m1.025s 00:06:26.032 03:15:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.032 03:15:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.032 ************************************ 00:06:26.032 END TEST rpc 00:06:26.032 ************************************ 00:06:26.032 03:15:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:26.032 03:15:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.032 03:15:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.032 03:15:13 -- common/autotest_common.sh@10 -- # set +x 00:06:26.032 ************************************ 00:06:26.032 START TEST skip_rpc 00:06:26.032 ************************************ 00:06:26.032 03:15:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:26.292 * Looking for test storage... 
00:06:26.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.292 03:15:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.292 --rc genhtml_branch_coverage=1 00:06:26.292 --rc genhtml_function_coverage=1 00:06:26.292 --rc genhtml_legend=1 00:06:26.292 --rc geninfo_all_blocks=1 00:06:26.292 --rc geninfo_unexecuted_blocks=1 00:06:26.292 00:06:26.292 ' 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.292 --rc genhtml_branch_coverage=1 00:06:26.292 --rc genhtml_function_coverage=1 00:06:26.292 --rc genhtml_legend=1 00:06:26.292 --rc geninfo_all_blocks=1 00:06:26.292 --rc geninfo_unexecuted_blocks=1 00:06:26.292 00:06:26.292 ' 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:26.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.292 --rc genhtml_branch_coverage=1 00:06:26.292 --rc genhtml_function_coverage=1 00:06:26.292 --rc genhtml_legend=1 00:06:26.292 --rc geninfo_all_blocks=1 00:06:26.292 --rc geninfo_unexecuted_blocks=1 00:06:26.292 00:06:26.292 ' 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.292 --rc genhtml_branch_coverage=1 00:06:26.292 --rc genhtml_function_coverage=1 00:06:26.292 --rc genhtml_legend=1 00:06:26.292 --rc geninfo_all_blocks=1 00:06:26.292 --rc geninfo_unexecuted_blocks=1 00:06:26.292 00:06:26.292 ' 00:06:26.292 03:15:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:26.292 03:15:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:26.292 03:15:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.292 03:15:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.292 ************************************ 00:06:26.292 START TEST skip_rpc 00:06:26.292 ************************************ 00:06:26.292 03:15:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:26.292 03:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71240 00:06:26.292 03:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.292 03:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:26.292 03:15:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:26.551 [2024-11-21 03:15:13.952968] Starting SPDK v25.01-pre 
git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:26.551 [2024-11-21 03:15:13.953282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:06:26.551 [2024-11-21 03:15:14.101766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.810 [2024-11-21 03:15:14.139246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.810 [2024-11-21 03:15:14.183052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 
-- # (( es > 128 ))
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71240
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71240 ']'
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71240
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71240
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71240'
00:06:32.136 killing process with pid 71240
03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71240
00:06:32.136 03:15:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71240
00:06:32.136
00:06:32.136 real 0m5.673s
00:06:32.136 user 0m5.101s
00:06:32.136 sys 0m0.484s
00:06:32.136 03:15:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.136 03:15:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:32.136 ************************************
00:06:32.136 END TEST skip_rpc
00:06:32.136 ************************************
00:06:32.136 03:15:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:32.136 03:15:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:32.136 03:15:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.136 03:15:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:32.136 ************************************
00:06:32.136 START TEST skip_rpc_with_json
00:06:32.136 ************************************
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71333
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71333
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71333 ']'
00:06:32.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:32.136 03:15:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:32.136 [2024-11-21 03:15:19.672790] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization...
00:06:32.136 [2024-11-21 03:15:19.672991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71333 ]
00:06:32.395 [2024-11-21 03:15:19.827789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:32.395 [2024-11-21 03:15:19.863657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.395 [2024-11-21 03:15:19.908917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:33.334 [2024-11-21 03:15:20.578613] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:33.334 request:
00:06:33.334 {
00:06:33.334 "trtype": "tcp",
00:06:33.334 "method": "nvmf_get_transports",
00:06:33.334 "req_id": 1
00:06:33.334 }
00:06:33.334 Got JSON-RPC error response
00:06:33.334 response:
00:06:33.334 {
00:06:33.334 "code": -19,
00:06:33.334 "message": "No such device"
00:06:33.334 }
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:33.334 [2024-11-21 03:15:20.590771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.334 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:33.334 {
00:06:33.334 "subsystems": [
00:06:33.334 {
00:06:33.334 "subsystem": "fsdev",
00:06:33.334 "config": [
00:06:33.334 {
00:06:33.334 "method": "fsdev_set_opts",
00:06:33.334 "params": {
00:06:33.334 "fsdev_io_pool_size": 65535,
00:06:33.334 "fsdev_io_cache_size": 256
00:06:33.334 }
00:06:33.334 }
00:06:33.334 ]
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "subsystem": "keyring",
00:06:33.334 "config": []
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "subsystem": "iobuf",
00:06:33.334 "config": [
00:06:33.334 {
00:06:33.334 "method": "iobuf_set_options",
00:06:33.334 "params": {
00:06:33.334 "small_pool_count": 8192,
00:06:33.334 "large_pool_count": 1024,
00:06:33.334 "small_bufsize": 8192,
00:06:33.334 "large_bufsize": 135168,
00:06:33.334 "enable_numa": false
00:06:33.334 }
00:06:33.334 }
00:06:33.334 ]
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "subsystem": "sock",
00:06:33.334 "config": [
00:06:33.334 {
00:06:33.334 "method": "sock_set_default_impl",
00:06:33.334 "params": {
00:06:33.334 "impl_name": "posix"
00:06:33.334 }
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "method": "sock_impl_set_options",
00:06:33.334 "params": {
00:06:33.334 "impl_name": "ssl",
00:06:33.334 "recv_buf_size": 4096,
00:06:33.334 "send_buf_size": 4096,
00:06:33.334 "enable_recv_pipe": true,
00:06:33.334 "enable_quickack": false,
00:06:33.334 "enable_placement_id": 0,
00:06:33.334 "enable_zerocopy_send_server": true,
00:06:33.334 "enable_zerocopy_send_client": false,
00:06:33.334 "zerocopy_threshold": 0,
00:06:33.334 "tls_version": 0,
00:06:33.334 "enable_ktls": false
00:06:33.334 }
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "method": "sock_impl_set_options",
00:06:33.334 "params": {
00:06:33.334 "impl_name": "posix",
00:06:33.334 "recv_buf_size": 2097152,
00:06:33.334 "send_buf_size": 2097152,
00:06:33.334 "enable_recv_pipe": true,
00:06:33.334 "enable_quickack": false,
00:06:33.334 "enable_placement_id": 0,
00:06:33.334 "enable_zerocopy_send_server": true,
00:06:33.334 "enable_zerocopy_send_client": false,
00:06:33.334 "zerocopy_threshold": 0,
00:06:33.334 "tls_version": 0,
00:06:33.334 "enable_ktls": false
00:06:33.334 }
00:06:33.334 }
00:06:33.334 ]
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "subsystem": "vmd",
00:06:33.334 "config": []
00:06:33.334 },
00:06:33.334 {
00:06:33.334 "subsystem": "accel",
00:06:33.334 "config": [
00:06:33.334 {
00:06:33.334 "method": "accel_set_options",
00:06:33.334 "params": {
00:06:33.334 "small_cache_size": 128,
00:06:33.334 "large_cache_size": 16,
00:06:33.334 "task_count": 2048,
00:06:33.335 "sequence_count": 2048,
00:06:33.335 "buf_count": 2048
00:06:33.335 }
00:06:33.335 }
00:06:33.335 ]
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "bdev",
00:06:33.335 "config": [
00:06:33.335 {
00:06:33.335 "method": "bdev_set_options",
00:06:33.335 "params": {
00:06:33.335 "bdev_io_pool_size": 65535,
00:06:33.335 "bdev_io_cache_size": 256,
00:06:33.335 "bdev_auto_examine": true,
00:06:33.335 "iobuf_small_cache_size": 128,
00:06:33.335 "iobuf_large_cache_size": 16
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "bdev_raid_set_options",
00:06:33.335 "params": {
00:06:33.335 "process_window_size_kb": 1024,
00:06:33.335 "process_max_bandwidth_mb_sec": 0
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "bdev_iscsi_set_options",
00:06:33.335 "params": {
00:06:33.335 "timeout_sec": 30
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "bdev_nvme_set_options",
00:06:33.335 "params": {
00:06:33.335 "action_on_timeout": "none",
00:06:33.335 "timeout_us": 0,
00:06:33.335 "timeout_admin_us": 0,
00:06:33.335 "keep_alive_timeout_ms": 10000,
00:06:33.335 "arbitration_burst": 0,
00:06:33.335 "low_priority_weight": 0,
00:06:33.335 "medium_priority_weight": 0,
00:06:33.335 "high_priority_weight": 0,
00:06:33.335 "nvme_adminq_poll_period_us": 10000,
00:06:33.335 "nvme_ioq_poll_period_us": 0,
00:06:33.335 "io_queue_requests": 0,
00:06:33.335 "delay_cmd_submit": true,
00:06:33.335 "transport_retry_count": 4,
00:06:33.335 "bdev_retry_count": 3,
00:06:33.335 "transport_ack_timeout": 0,
00:06:33.335 "ctrlr_loss_timeout_sec": 0,
00:06:33.335 "reconnect_delay_sec": 0,
00:06:33.335 "fast_io_fail_timeout_sec": 0,
00:06:33.335 "disable_auto_failback": false,
00:06:33.335 "generate_uuids": false,
00:06:33.335 "transport_tos": 0,
00:06:33.335 "nvme_error_stat": false,
00:06:33.335 "rdma_srq_size": 0,
00:06:33.335 "io_path_stat": false,
00:06:33.335 "allow_accel_sequence": false,
00:06:33.335 "rdma_max_cq_size": 0,
00:06:33.335 "rdma_cm_event_timeout_ms": 0,
00:06:33.335 "dhchap_digests": [
00:06:33.335 "sha256",
00:06:33.335 "sha384",
00:06:33.335 "sha512"
00:06:33.335 ],
00:06:33.335 "dhchap_dhgroups": [
00:06:33.335 "null",
00:06:33.335 "ffdhe2048",
00:06:33.335 "ffdhe3072",
00:06:33.335 "ffdhe4096",
00:06:33.335 "ffdhe6144",
00:06:33.335 "ffdhe8192"
00:06:33.335 ]
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "bdev_nvme_set_hotplug",
00:06:33.335 "params": {
00:06:33.335 "period_us": 100000,
00:06:33.335 "enable": false
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "bdev_wait_for_examine"
00:06:33.335 }
00:06:33.335 ]
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "scsi",
00:06:33.335 "config": null
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "scheduler",
00:06:33.335 "config": [
00:06:33.335 {
00:06:33.335 "method": "framework_set_scheduler",
00:06:33.335 "params": {
00:06:33.335 "name": "static"
00:06:33.335 }
00:06:33.335 }
00:06:33.335 ]
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "vhost_scsi",
00:06:33.335 "config": []
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "vhost_blk",
00:06:33.335 "config": []
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "ublk",
00:06:33.335 "config": []
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "nbd",
00:06:33.335 "config": []
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "nvmf",
00:06:33.335 "config": [
00:06:33.335 {
00:06:33.335 "method": "nvmf_set_config",
00:06:33.335 "params": {
00:06:33.335 "discovery_filter": "match_any",
00:06:33.335 "admin_cmd_passthru": {
00:06:33.335 "identify_ctrlr": false
00:06:33.335 },
00:06:33.335 "dhchap_digests": [
00:06:33.335 "sha256",
00:06:33.335 "sha384",
00:06:33.335 "sha512"
00:06:33.335 ],
00:06:33.335 "dhchap_dhgroups": [
00:06:33.335 "null",
00:06:33.335 "ffdhe2048",
00:06:33.335 "ffdhe3072",
00:06:33.335 "ffdhe4096",
00:06:33.335 "ffdhe6144",
00:06:33.335 "ffdhe8192"
00:06:33.335 ]
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "nvmf_set_max_subsystems",
00:06:33.335 "params": {
00:06:33.335 "max_subsystems": 1024
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "nvmf_set_crdt",
00:06:33.335 "params": {
00:06:33.335 "crdt1": 0,
00:06:33.335 "crdt2": 0,
00:06:33.335 "crdt3": 0
00:06:33.335 }
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "method": "nvmf_create_transport",
00:06:33.335 "params": {
00:06:33.335 "trtype": "TCP",
00:06:33.335 "max_queue_depth": 128,
00:06:33.335 "max_io_qpairs_per_ctrlr": 127,
00:06:33.335 "in_capsule_data_size": 4096,
00:06:33.335 "max_io_size": 131072,
00:06:33.335 "io_unit_size": 131072,
00:06:33.335 "max_aq_depth": 128,
00:06:33.335 "num_shared_buffers": 511,
00:06:33.335 "buf_cache_size": 4294967295,
00:06:33.335 "dif_insert_or_strip": false,
00:06:33.335 "zcopy": false,
00:06:33.335 "c2h_success": true,
00:06:33.335 "sock_priority": 0,
00:06:33.335 "abort_timeout_sec": 1,
00:06:33.335 "ack_timeout": 0,
00:06:33.335 "data_wr_pool_size": 0
00:06:33.335 }
00:06:33.335 }
00:06:33.335 ]
00:06:33.335 },
00:06:33.335 {
00:06:33.335 "subsystem": "iscsi",
00:06:33.335 "config": [
00:06:33.335 {
00:06:33.335 "method": "iscsi_set_options",
00:06:33.335 "params": {
00:06:33.335 "node_base": "iqn.2016-06.io.spdk",
00:06:33.335 "max_sessions": 128,
00:06:33.335 "max_connections_per_session": 2,
00:06:33.335 "max_queue_depth": 64,
00:06:33.335 "default_time2wait": 2,
00:06:33.335 "default_time2retain": 20,
00:06:33.335 "first_burst_length": 8192,
00:06:33.335 "immediate_data": true,
00:06:33.335 "allow_duplicated_isid": false,
00:06:33.335 "error_recovery_level": 0,
00:06:33.335 "nop_timeout": 60,
00:06:33.335 "nop_in_interval": 30,
00:06:33.335 "disable_chap": false,
00:06:33.335 "require_chap": false,
00:06:33.335 "mutual_chap": false,
00:06:33.335 "chap_group": 0,
00:06:33.335 "max_large_datain_per_connection": 64,
00:06:33.335 "max_r2t_per_connection": 4,
00:06:33.335 "pdu_pool_size": 36864,
00:06:33.335 "immediate_data_pool_size": 16384,
00:06:33.335 "data_out_pool_size": 2048
00:06:33.335 }
00:06:33.335 }
00:06:33.335 ]
00:06:33.335 }
00:06:33.335 ]
00:06:33.335 }
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71333
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71333 ']'
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71333
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71333
killing process with pid 71333
03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71333'
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71333
00:06:33.335 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71333
00:06:33.904 03:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71361
00:06:33.904 03:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:33.904 03:15:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71361
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71361 ']'
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71361
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71361
00:06:39.189 killing process with pid 71361
03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71361'
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71361
00:06:39.189 03:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71361
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:39.760
00:06:39.760 real 0m7.509s
00:06:39.760 user 0m6.839s
00:06:39.760 sys 0m1.028s
00:06:39.760 ************************************
00:06:39.760 END TEST skip_rpc_with_json
00:06:39.760 ************************************
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:39.760 03:15:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:39.760 03:15:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:39.760 03:15:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.760 03:15:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:39.760 ************************************
00:06:39.760 START TEST skip_rpc_with_delay
00:06:39.760 ************************************
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:39.760 [2024-11-21 03:15:27.237611] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:06:39.760 ************************************
00:06:39.760 END TEST skip_rpc_with_delay
00:06:39.760 ************************************
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:39.760
00:06:39.760 real 0m0.191s
00:06:39.760 user 0m0.116s
00:06:39.760 sys 0m0.074s
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.760 03:15:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:40.021 03:15:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:40.021 03:15:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:40.021 03:15:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:40.021 03:15:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:40.021 03:15:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.021 03:15:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:40.021 ************************************
00:06:40.021 START TEST exit_on_failed_rpc_init
00:06:40.021 ************************************
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71473
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71473
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71473 ']'
00:06:40.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:40.021 03:15:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:40.021 [2024-11-21 03:15:27.496969] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization...
00:06:40.021 [2024-11-21 03:15:27.497129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71473 ]
00:06:40.281 [2024-11-21 03:15:27.633384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:40.281 [2024-11-21 03:15:27.662008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.281 [2024-11-21 03:15:27.707909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:40.851 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:41.111 [2024-11-21 03:15:28.422076] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization...
00:06:41.111 [2024-11-21 03:15:28.422316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71491 ]
00:06:41.111 [2024-11-21 03:15:28.561060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:41.111 [2024-11-21 03:15:28.598990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.111 [2024-11-21 03:15:28.629545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:41.111 [2024-11-21 03:15:28.629743] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:41.111 [2024-11-21 03:15:28.629798] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:41.111 [2024-11-21 03:15:28.629831] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71473
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71473 ']'
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71473
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71473
killing process with pid 71473
03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71473'
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71473
00:06:41.371 03:15:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71473
00:06:41.940
00:06:41.940 real 0m1.987s
00:06:41.940 user 0m1.958s
00:06:41.940 sys 0m0.667s
00:06:41.940 ************************************
00:06:41.940 END TEST exit_on_failed_rpc_init
00:06:41.940 ************************************
00:06:41.940 03:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.940 03:15:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:41.940 03:15:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:41.940 ************************************
00:06:41.940 END TEST skip_rpc
00:06:41.940 ************************************
00:06:41.941
00:06:41.941 real 0m15.866s
00:06:41.941 user 0m14.228s
00:06:41.941 sys 0m2.566s
00:06:41.941 03:15:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.941 03:15:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:42.200 03:15:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:42.200 03:15:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.200 03:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.200 03:15:29 -- common/autotest_common.sh@10 -- # set +x
00:06:42.200 ************************************
00:06:42.200 START TEST rpc_client
00:06:42.200 ************************************
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:42.200 * Looking for test storage...
00:06:42.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:42.200 03:15:29 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:42.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.200 --rc genhtml_branch_coverage=1
00:06:42.200 --rc genhtml_function_coverage=1
00:06:42.200 --rc genhtml_legend=1
00:06:42.200 --rc geninfo_all_blocks=1
00:06:42.200 --rc geninfo_unexecuted_blocks=1
00:06:42.200
00:06:42.200 '
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:42.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.200 --rc genhtml_branch_coverage=1
00:06:42.200 --rc genhtml_function_coverage=1
00:06:42.200 --rc genhtml_legend=1
00:06:42.200 --rc geninfo_all_blocks=1
00:06:42.200 --rc geninfo_unexecuted_blocks=1
00:06:42.200
00:06:42.200 '
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:42.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.200 --rc genhtml_branch_coverage=1
00:06:42.200 --rc genhtml_function_coverage=1
00:06:42.200 --rc genhtml_legend=1
00:06:42.200 --rc geninfo_all_blocks=1
00:06:42.200 --rc geninfo_unexecuted_blocks=1
00:06:42.200
00:06:42.200 '
00:06:42.200 03:15:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:42.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.200 --rc genhtml_branch_coverage=1
00:06:42.200 --rc genhtml_function_coverage=1
00:06:42.200 --rc genhtml_legend=1
00:06:42.200 --rc geninfo_all_blocks=1
00:06:42.200 --rc geninfo_unexecuted_blocks=1
00:06:42.200
00:06:42.200 '
00:06:42.200 03:15:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:06:42.460 OK
00:06:42.460 03:15:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:42.460
00:06:42.460 real 0m0.312s
00:06:42.460 user 0m0.177s
00:06:42.460 sys 0m0.152s
00:06:42.460 03:15:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.460 03:15:29 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:42.460 ************************************
00:06:42.460 END TEST rpc_client
00:06:42.460 ************************************
00:06:42.460 03:15:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:42.460 03:15:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.460 03:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.460 03:15:29 -- common/autotest_common.sh@10 -- # set +x
00:06:42.460 ************************************
00:06:42.460 START TEST json_config
00:06:42.460 ************************************
00:06:42.460 03:15:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:42.460 03:15:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:42.460 03:15:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:06:42.460 03:15:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:42.721 03:15:30 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:42.721 03:15:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:42.721 03:15:30 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:42.721 03:15:30 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:42.721 03:15:30 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:42.721 03:15:30 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:42.721 03:15:30 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:42.721 03:15:30 json_config -- scripts/common.sh@345 -- # : 1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:42.721 03:15:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:42.721 03:15:30 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@353 -- # local d=1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:42.721 03:15:30 json_config -- scripts/common.sh@355 -- # echo 1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:42.721 03:15:30 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@353 -- # local d=2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:42.721 03:15:30 json_config -- scripts/common.sh@355 -- # echo 2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:42.721 03:15:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:42.721 03:15:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:42.721 03:15:30 json_config -- scripts/common.sh@368 -- # return 0
00:06:42.721 03:15:30 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:42.721 03:15:30 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:42.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.721 --rc genhtml_branch_coverage=1
00:06:42.721 --rc genhtml_function_coverage=1
00:06:42.721 --rc genhtml_legend=1
00:06:42.721 --rc geninfo_all_blocks=1
00:06:42.721 --rc geninfo_unexecuted_blocks=1
00:06:42.721
00:06:42.721 '
00:06:42.721 03:15:30 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:42.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:42.721 --rc genhtml_branch_coverage=1
00:06:42.721 --rc genhtml_function_coverage=1
00:06:42.721 --rc genhtml_legend=1
00:06:42.721 --rc geninfo_all_blocks=1
00:06:42.721 --rc geninfo_unexecuted_blocks=1
00:06:42.721
00:06:42.721 '
00:06:42.721 03:15:30 json_config --
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.721 --rc genhtml_branch_coverage=1 00:06:42.721 --rc genhtml_function_coverage=1 00:06:42.721 --rc genhtml_legend=1 00:06:42.721 --rc geninfo_all_blocks=1 00:06:42.721 --rc geninfo_unexecuted_blocks=1 00:06:42.721 00:06:42.721 ' 00:06:42.721 03:15:30 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.721 --rc genhtml_branch_coverage=1 00:06:42.721 --rc genhtml_function_coverage=1 00:06:42.721 --rc genhtml_legend=1 00:06:42.721 --rc geninfo_all_blocks=1 00:06:42.721 --rc geninfo_unexecuted_blocks=1 00:06:42.721 00:06:42.721 ' 00:06:42.721 03:15:30 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:86a728fc-24bd-4818-abba-33fbc8c192df 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=86a728fc-24bd-4818-abba-33fbc8c192df 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.721 03:15:30 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.721 03:15:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.721 03:15:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.721 03:15:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.721 03:15:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.722 03:15:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.722 03:15:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.722 03:15:30 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.722 03:15:30 json_config -- paths/export.sh@5 -- # export PATH 00:06:42.722 03:15:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@51 -- # : 0 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.722 03:15:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
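The `lt 1.15 2` / `cmp_versions` trace that recurs above (scripts/common.sh) splits each version string on `.`, `-`, and `:` and compares the pieces element by element. A minimal sketch of that logic, assuming purely numeric components (the real script routes each element through its `decimal` helper first, as the `decimal 1` / `decimal 2` entries show):

```shell
# Sketch of the version comparison traced above: split on '.', '-', ':'
# and compare the resulting arrays element-wise, numerically.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing elements count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # versions are equal, so not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is what decides whether the installed `lcov` is older than 2 and therefore which `--rc` coverage options get exported.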
00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:42.722 WARNING: No tests are enabled so not running JSON configuration tests 00:06:42.722 03:15:30 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:42.722 00:06:42.722 real 0m0.237s 00:06:42.722 user 0m0.143s 00:06:42.722 sys 0m0.098s 00:06:42.722 03:15:30 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.722 03:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.722 ************************************ 00:06:42.722 END TEST json_config 00:06:42.722 ************************************ 00:06:42.722 03:15:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:42.722 03:15:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.722 03:15:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.722 03:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:42.722 ************************************ 00:06:42.722 START TEST json_config_extra_key 00:06:42.722 ************************************ 00:06:42.722 03:15:30 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:42.722 03:15:30 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.722 03:15:30 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:42.722 03:15:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.981 03:15:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.981 03:15:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:42.981 03:15:30 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.981 03:15:30 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.981 --rc genhtml_branch_coverage=1 00:06:42.981 --rc genhtml_function_coverage=1 00:06:42.981 --rc genhtml_legend=1 00:06:42.981 --rc geninfo_all_blocks=1 00:06:42.981 --rc geninfo_unexecuted_blocks=1 00:06:42.981 00:06:42.981 ' 00:06:42.981 03:15:30 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.981 --rc genhtml_branch_coverage=1 00:06:42.981 --rc genhtml_function_coverage=1 00:06:42.981 --rc 
genhtml_legend=1 00:06:42.981 --rc geninfo_all_blocks=1 00:06:42.981 --rc geninfo_unexecuted_blocks=1 00:06:42.981 00:06:42.981 ' 00:06:42.981 03:15:30 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.981 --rc genhtml_branch_coverage=1 00:06:42.981 --rc genhtml_function_coverage=1 00:06:42.981 --rc genhtml_legend=1 00:06:42.981 --rc geninfo_all_blocks=1 00:06:42.981 --rc geninfo_unexecuted_blocks=1 00:06:42.981 00:06:42.981 ' 00:06:42.981 03:15:30 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.981 --rc genhtml_branch_coverage=1 00:06:42.981 --rc genhtml_function_coverage=1 00:06:42.981 --rc genhtml_legend=1 00:06:42.981 --rc geninfo_all_blocks=1 00:06:42.981 --rc geninfo_unexecuted_blocks=1 00:06:42.981 00:06:42.981 ' 00:06:42.981 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.981 03:15:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:86a728fc-24bd-4818-abba-33fbc8c192df 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=86a728fc-24bd-4818-abba-33fbc8c192df 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.982 03:15:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.982 03:15:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.982 03:15:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.982 03:15:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.982 03:15:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.982 03:15:30 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.982 03:15:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.982 03:15:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:42.982 03:15:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
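Both json_config runs above log the same non-fatal bug from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an unset flag reaches a numeric test as the empty string. A small sketch of the failure and one defensive fix (the flag name here is hypothetical):

```shell
# Reproduce the "[: : integer expression expected" noise seen above:
# an empty string in a numeric `[` test is an error, not false.
flag=""                                           # hypothetical unset test flag
[ "$flag" -eq 1 ] 2>/dev/null || echo "empty string breaks -eq"

# Defensive form: default empty/unset values to 0 before comparing.
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled"
fi
```

`${flag:-0}` substitutes `0` when the variable is unset *or* null, so the numeric test always sees an integer.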
00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.982 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.982 03:15:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:42.982 INFO: launching applications... 
00:06:42.982 03:15:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71679 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:42.982 Waiting for target to run... 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71679 /var/tmp/spdk_tgt.sock 00:06:42.982 03:15:30 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:42.982 03:15:30 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71679 ']' 00:06:42.982 03:15:30 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:42.982 03:15:30 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.982 03:15:30 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:42.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:42.982 03:15:30 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.982 03:15:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:42.982 [2024-11-21 03:15:30.514979] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:42.982 [2024-11-21 03:15:30.515132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71679 ] 00:06:43.550 [2024-11-21 03:15:30.867734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.550 [2024-11-21 03:15:30.906333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.550 [2024-11-21 03:15:30.932136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.118 00:06:44.118 INFO: shutting down applications... 00:06:44.118 03:15:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.118 03:15:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:44.118 03:15:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:44.118 03:15:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
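The `waitforlisten 71679 /var/tmp/spdk_tgt.sock` step above amounts to polling, under a retry budget, until the target is up on its UNIX domain socket. A simplified sketch under stated assumptions: the real helper probes the socket with RPCs, while here the check is reduced to the socket path appearing, and the path itself is a stand-in:

```shell
# Simplified waitforlisten: poll until the socket path exists or the
# retry budget (max_retries=100 in the trace) is exhausted.
rpc_addr=$(mktemp -u)                 # stand-in for /var/tmp/spdk_tgt.sock
max_retries=100
( sleep 0.3; touch "$rpc_addr" ) &    # stand-in for spdk_tgt creating its socket
for (( i = 0; i < max_retries; i++ )); do
  [ -e "$rpc_addr" ] && break
  sleep 0.1
done
(( i < max_retries )) && echo "target is listening on $rpc_addr"
```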
00:06:44.118 03:15:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:44.118 03:15:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71679 ]] 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71679 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71679 00:06:44.119 03:15:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.379 03:15:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.379 03:15:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.379 03:15:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71679 00:06:44.379 03:15:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71679 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:44.947 03:15:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:44.947 SPDK target shutdown done 00:06:44.947 03:15:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 
00:06:44.947 Success 00:06:44.947 00:06:44.947 real 0m2.268s 00:06:44.947 user 0m1.850s 00:06:44.947 sys 0m0.489s 00:06:44.947 ************************************ 00:06:44.947 END TEST json_config_extra_key 00:06:44.948 ************************************ 00:06:44.948 03:15:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.948 03:15:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.948 03:15:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:44.948 03:15:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.948 03:15:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.948 03:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:45.207 ************************************ 00:06:45.207 START TEST alias_rpc 00:06:45.207 ************************************ 00:06:45.207 03:15:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:45.207 * Looking for test storage... 
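The shutdown sequence traced above (json_config/common.sh) sends SIGINT to the target, then polls `kill -0` up to 30 times with 0.5 s sleeps until the pid is gone. A self-contained sketch, using a background `sleep` as the stand-in target and SIGTERM in place of SIGINT (non-interactive shells start background jobs with SIGINT ignored, so SIGINT would not land in this sketch):

```shell
sleep 30 &                      # stand-in for the spdk_tgt process
app_pid=$!
kill -TERM "$app_pid"           # the harness sends SIGINT here
for (( i = 0; i < 30; i++ )); do
  kill -0 "$app_pid" 2>/dev/null || break   # signal 0 = existence check
  sleep 0.5
done
echo "target shutdown done"     # mirrors 'SPDK target shutdown done'
```

The `kill -0` probe sends no signal at all; it only reports whether the pid still exists, which is why the loop can use it as its exit condition.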
00:06:45.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:45.207 03:15:32 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.207 03:15:32 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.207 03:15:32 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.207 03:15:32 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.207 03:15:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.208 03:15:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.208 03:15:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.208 --rc genhtml_branch_coverage=1 00:06:45.208 --rc genhtml_function_coverage=1 00:06:45.208 --rc genhtml_legend=1 00:06:45.208 --rc geninfo_all_blocks=1 00:06:45.208 --rc geninfo_unexecuted_blocks=1 00:06:45.208 00:06:45.208 ' 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.208 --rc genhtml_branch_coverage=1 00:06:45.208 --rc genhtml_function_coverage=1 00:06:45.208 --rc genhtml_legend=1 00:06:45.208 --rc geninfo_all_blocks=1 00:06:45.208 --rc geninfo_unexecuted_blocks=1 00:06:45.208 00:06:45.208 ' 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:45.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.208 --rc genhtml_branch_coverage=1 00:06:45.208 --rc genhtml_function_coverage=1 00:06:45.208 --rc genhtml_legend=1 00:06:45.208 --rc geninfo_all_blocks=1 00:06:45.208 --rc geninfo_unexecuted_blocks=1 00:06:45.208 00:06:45.208 ' 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.208 --rc genhtml_branch_coverage=1 00:06:45.208 --rc genhtml_function_coverage=1 00:06:45.208 --rc genhtml_legend=1 00:06:45.208 --rc geninfo_all_blocks=1 00:06:45.208 --rc geninfo_unexecuted_blocks=1 00:06:45.208 00:06:45.208 ' 00:06:45.208 03:15:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.208 03:15:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.208 03:15:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71764 00:06:45.208 03:15:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71764 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71764 ']' 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.208 03:15:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.468 [2024-11-21 03:15:32.855101] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:06:45.468 [2024-11-21 03:15:32.855257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71764 ] 00:06:45.468 [2024-11-21 03:15:32.997502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.468 [2024-11-21 03:15:33.022220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.728 [2024-11-21 03:15:33.067160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.304 03:15:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.304 03:15:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.304 03:15:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:46.574 03:15:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71764 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71764 ']' 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71764 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71764 00:06:46.574 killing process with pid 71764 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71764' 00:06:46.574 03:15:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 71764 00:06:46.574 03:15:34 alias_rpc -- 
common/autotest_common.sh@978 -- # wait 71764 00:06:47.142 ************************************ 00:06:47.142 END TEST alias_rpc 00:06:47.142 ************************************ 00:06:47.142 00:06:47.142 real 0m2.183s 00:06:47.142 user 0m2.140s 00:06:47.142 sys 0m0.719s 00:06:47.142 03:15:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.142 03:15:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 03:15:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:47.401 03:15:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.401 03:15:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.401 03:15:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.401 03:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 ************************************ 00:06:47.401 START TEST spdkcli_tcp 00:06:47.401 ************************************ 00:06:47.401 03:15:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.401 * Looking for test storage... 
00:06:47.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:47.401 03:15:34 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.401 03:15:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.401 03:15:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.660 03:15:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.660 03:15:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.661 03:15:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.661 03:15:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.661 03:15:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.661 03:15:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.661 03:15:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.661 03:15:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.661 --rc genhtml_branch_coverage=1 00:06:47.661 --rc genhtml_function_coverage=1 00:06:47.661 --rc genhtml_legend=1 00:06:47.661 --rc geninfo_all_blocks=1 00:06:47.661 --rc geninfo_unexecuted_blocks=1 00:06:47.661 00:06:47.661 ' 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.661 --rc genhtml_branch_coverage=1 00:06:47.661 --rc genhtml_function_coverage=1 00:06:47.661 --rc genhtml_legend=1 00:06:47.661 --rc geninfo_all_blocks=1 00:06:47.661 --rc geninfo_unexecuted_blocks=1 00:06:47.661 00:06:47.661 ' 00:06:47.661 03:15:35 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.661 --rc genhtml_branch_coverage=1 00:06:47.661 --rc genhtml_function_coverage=1 00:06:47.661 --rc genhtml_legend=1 00:06:47.661 --rc geninfo_all_blocks=1 00:06:47.661 --rc geninfo_unexecuted_blocks=1 00:06:47.661 00:06:47.661 ' 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.661 --rc genhtml_branch_coverage=1 00:06:47.661 --rc genhtml_function_coverage=1 00:06:47.661 --rc genhtml_legend=1 00:06:47.661 --rc geninfo_all_blocks=1 00:06:47.661 --rc geninfo_unexecuted_blocks=1 00:06:47.661 00:06:47.661 ' 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71857 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:47.661 03:15:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71857 00:06:47.661 03:15:35 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 71857 ']' 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.661 03:15:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.661 [2024-11-21 03:15:35.153976] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:47.661 [2024-11-21 03:15:35.154331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71857 ] 00:06:47.921 [2024-11-21 03:15:35.314084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
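The trace above shows `waitforlisten` blocking until `spdk_tgt` is accepting connections on `/var/tmp/spdk.sock` (with `max_retries=100`). A minimal Python sketch of that poll-until-listening pattern follows; the socket path, retry count, and delay here are illustrative stand-ins, not SPDK's exact implementation.

```python
# Sketch of the "waitforlisten" pattern from the log: poll a UNIX domain
# socket until the target process accepts a connection, up to max_retries.
import os
import socket
import tempfile
import time

def wait_for_listen(rpc_addr, max_retries=100, delay=0.1):
    """Return True once rpc_addr accepts a connection, False after max_retries."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(rpc_addr)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    return False

# Demo: stand up our own listener so the helper has something to find.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
print(wait_for_listen(path, max_retries=5))  # True
server.close()
```

The test scripts use the same idea in shell, retrying the RPC socket before issuing any `rpc.py` commands.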
00:06:47.921 [2024-11-21 03:15:35.353523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.921 [2024-11-21 03:15:35.400378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.921 [2024-11-21 03:15:35.400485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.489 03:15:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.489 03:15:36 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:48.489 03:15:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71874 00:06:48.489 03:15:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:48.489 03:15:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:48.747 [ 00:06:48.747 "bdev_malloc_delete", 00:06:48.747 "bdev_malloc_create", 00:06:48.747 "bdev_null_resize", 00:06:48.747 "bdev_null_delete", 00:06:48.747 "bdev_null_create", 00:06:48.747 "bdev_nvme_cuse_unregister", 00:06:48.747 "bdev_nvme_cuse_register", 00:06:48.747 "bdev_opal_new_user", 00:06:48.747 "bdev_opal_set_lock_state", 00:06:48.747 "bdev_opal_delete", 00:06:48.747 "bdev_opal_get_info", 00:06:48.747 "bdev_opal_create", 00:06:48.747 "bdev_nvme_opal_revert", 00:06:48.747 "bdev_nvme_opal_init", 00:06:48.747 "bdev_nvme_send_cmd", 00:06:48.747 "bdev_nvme_set_keys", 00:06:48.747 "bdev_nvme_get_path_iostat", 00:06:48.747 "bdev_nvme_get_mdns_discovery_info", 00:06:48.747 "bdev_nvme_stop_mdns_discovery", 00:06:48.747 "bdev_nvme_start_mdns_discovery", 00:06:48.747 "bdev_nvme_set_multipath_policy", 00:06:48.747 "bdev_nvme_set_preferred_path", 00:06:48.747 "bdev_nvme_get_io_paths", 00:06:48.747 "bdev_nvme_remove_error_injection", 00:06:48.747 "bdev_nvme_add_error_injection", 00:06:48.747 "bdev_nvme_get_discovery_info", 00:06:48.747 "bdev_nvme_stop_discovery", 00:06:48.747 "bdev_nvme_start_discovery", 00:06:48.747 
"bdev_nvme_get_controller_health_info", 00:06:48.747 "bdev_nvme_disable_controller", 00:06:48.747 "bdev_nvme_enable_controller", 00:06:48.747 "bdev_nvme_reset_controller", 00:06:48.747 "bdev_nvme_get_transport_statistics", 00:06:48.747 "bdev_nvme_apply_firmware", 00:06:48.747 "bdev_nvme_detach_controller", 00:06:48.747 "bdev_nvme_get_controllers", 00:06:48.747 "bdev_nvme_attach_controller", 00:06:48.747 "bdev_nvme_set_hotplug", 00:06:48.747 "bdev_nvme_set_options", 00:06:48.747 "bdev_passthru_delete", 00:06:48.747 "bdev_passthru_create", 00:06:48.747 "bdev_lvol_set_parent_bdev", 00:06:48.747 "bdev_lvol_set_parent", 00:06:48.747 "bdev_lvol_check_shallow_copy", 00:06:48.747 "bdev_lvol_start_shallow_copy", 00:06:48.748 "bdev_lvol_grow_lvstore", 00:06:48.748 "bdev_lvol_get_lvols", 00:06:48.748 "bdev_lvol_get_lvstores", 00:06:48.748 "bdev_lvol_delete", 00:06:48.748 "bdev_lvol_set_read_only", 00:06:48.748 "bdev_lvol_resize", 00:06:48.748 "bdev_lvol_decouple_parent", 00:06:48.748 "bdev_lvol_inflate", 00:06:48.748 "bdev_lvol_rename", 00:06:48.748 "bdev_lvol_clone_bdev", 00:06:48.748 "bdev_lvol_clone", 00:06:48.748 "bdev_lvol_snapshot", 00:06:48.748 "bdev_lvol_create", 00:06:48.748 "bdev_lvol_delete_lvstore", 00:06:48.748 "bdev_lvol_rename_lvstore", 00:06:48.748 "bdev_lvol_create_lvstore", 00:06:48.748 "bdev_raid_set_options", 00:06:48.748 "bdev_raid_remove_base_bdev", 00:06:48.748 "bdev_raid_add_base_bdev", 00:06:48.748 "bdev_raid_delete", 00:06:48.748 "bdev_raid_create", 00:06:48.748 "bdev_raid_get_bdevs", 00:06:48.748 "bdev_error_inject_error", 00:06:48.748 "bdev_error_delete", 00:06:48.748 "bdev_error_create", 00:06:48.748 "bdev_split_delete", 00:06:48.748 "bdev_split_create", 00:06:48.748 "bdev_delay_delete", 00:06:48.748 "bdev_delay_create", 00:06:48.748 "bdev_delay_update_latency", 00:06:48.748 "bdev_zone_block_delete", 00:06:48.748 "bdev_zone_block_create", 00:06:48.748 "blobfs_create", 00:06:48.748 "blobfs_detect", 00:06:48.748 "blobfs_set_cache_size", 00:06:48.748 
"bdev_aio_delete", 00:06:48.748 "bdev_aio_rescan", 00:06:48.748 "bdev_aio_create", 00:06:48.748 "bdev_ftl_set_property", 00:06:48.748 "bdev_ftl_get_properties", 00:06:48.748 "bdev_ftl_get_stats", 00:06:48.748 "bdev_ftl_unmap", 00:06:48.748 "bdev_ftl_unload", 00:06:48.748 "bdev_ftl_delete", 00:06:48.748 "bdev_ftl_load", 00:06:48.748 "bdev_ftl_create", 00:06:48.748 "bdev_virtio_attach_controller", 00:06:48.748 "bdev_virtio_scsi_get_devices", 00:06:48.748 "bdev_virtio_detach_controller", 00:06:48.748 "bdev_virtio_blk_set_hotplug", 00:06:48.748 "bdev_iscsi_delete", 00:06:48.748 "bdev_iscsi_create", 00:06:48.748 "bdev_iscsi_set_options", 00:06:48.748 "accel_error_inject_error", 00:06:48.748 "ioat_scan_accel_module", 00:06:48.748 "dsa_scan_accel_module", 00:06:48.748 "iaa_scan_accel_module", 00:06:48.748 "keyring_file_remove_key", 00:06:48.748 "keyring_file_add_key", 00:06:48.748 "keyring_linux_set_options", 00:06:48.748 "fsdev_aio_delete", 00:06:48.748 "fsdev_aio_create", 00:06:48.748 "iscsi_get_histogram", 00:06:48.748 "iscsi_enable_histogram", 00:06:48.748 "iscsi_set_options", 00:06:48.748 "iscsi_get_auth_groups", 00:06:48.748 "iscsi_auth_group_remove_secret", 00:06:48.748 "iscsi_auth_group_add_secret", 00:06:48.748 "iscsi_delete_auth_group", 00:06:48.748 "iscsi_create_auth_group", 00:06:48.748 "iscsi_set_discovery_auth", 00:06:48.748 "iscsi_get_options", 00:06:48.748 "iscsi_target_node_request_logout", 00:06:48.748 "iscsi_target_node_set_redirect", 00:06:48.748 "iscsi_target_node_set_auth", 00:06:48.748 "iscsi_target_node_add_lun", 00:06:48.748 "iscsi_get_stats", 00:06:48.748 "iscsi_get_connections", 00:06:48.748 "iscsi_portal_group_set_auth", 00:06:48.748 "iscsi_start_portal_group", 00:06:48.748 "iscsi_delete_portal_group", 00:06:48.748 "iscsi_create_portal_group", 00:06:48.748 "iscsi_get_portal_groups", 00:06:48.748 "iscsi_delete_target_node", 00:06:48.748 "iscsi_target_node_remove_pg_ig_maps", 00:06:48.748 "iscsi_target_node_add_pg_ig_maps", 00:06:48.748 
"iscsi_create_target_node", 00:06:48.748 "iscsi_get_target_nodes", 00:06:48.748 "iscsi_delete_initiator_group", 00:06:48.748 "iscsi_initiator_group_remove_initiators", 00:06:48.748 "iscsi_initiator_group_add_initiators", 00:06:48.748 "iscsi_create_initiator_group", 00:06:48.748 "iscsi_get_initiator_groups", 00:06:48.748 "nvmf_set_crdt", 00:06:48.748 "nvmf_set_config", 00:06:48.748 "nvmf_set_max_subsystems", 00:06:48.748 "nvmf_stop_mdns_prr", 00:06:48.748 "nvmf_publish_mdns_prr", 00:06:48.748 "nvmf_subsystem_get_listeners", 00:06:48.748 "nvmf_subsystem_get_qpairs", 00:06:48.748 "nvmf_subsystem_get_controllers", 00:06:48.748 "nvmf_get_stats", 00:06:48.748 "nvmf_get_transports", 00:06:48.748 "nvmf_create_transport", 00:06:48.748 "nvmf_get_targets", 00:06:48.748 "nvmf_delete_target", 00:06:48.748 "nvmf_create_target", 00:06:48.748 "nvmf_subsystem_allow_any_host", 00:06:48.748 "nvmf_subsystem_set_keys", 00:06:48.748 "nvmf_subsystem_remove_host", 00:06:48.748 "nvmf_subsystem_add_host", 00:06:48.748 "nvmf_ns_remove_host", 00:06:48.748 "nvmf_ns_add_host", 00:06:48.748 "nvmf_subsystem_remove_ns", 00:06:48.748 "nvmf_subsystem_set_ns_ana_group", 00:06:48.748 "nvmf_subsystem_add_ns", 00:06:48.748 "nvmf_subsystem_listener_set_ana_state", 00:06:48.748 "nvmf_discovery_get_referrals", 00:06:48.748 "nvmf_discovery_remove_referral", 00:06:48.748 "nvmf_discovery_add_referral", 00:06:48.748 "nvmf_subsystem_remove_listener", 00:06:48.748 "nvmf_subsystem_add_listener", 00:06:48.748 "nvmf_delete_subsystem", 00:06:48.748 "nvmf_create_subsystem", 00:06:48.748 "nvmf_get_subsystems", 00:06:48.748 "env_dpdk_get_mem_stats", 00:06:48.748 "nbd_get_disks", 00:06:48.748 "nbd_stop_disk", 00:06:48.748 "nbd_start_disk", 00:06:48.748 "ublk_recover_disk", 00:06:48.748 "ublk_get_disks", 00:06:48.748 "ublk_stop_disk", 00:06:48.748 "ublk_start_disk", 00:06:48.748 "ublk_destroy_target", 00:06:48.748 "ublk_create_target", 00:06:48.748 "virtio_blk_create_transport", 00:06:48.748 "virtio_blk_get_transports", 
00:06:48.748 "vhost_controller_set_coalescing", 00:06:48.748 "vhost_get_controllers", 00:06:48.748 "vhost_delete_controller", 00:06:48.748 "vhost_create_blk_controller", 00:06:48.748 "vhost_scsi_controller_remove_target", 00:06:48.748 "vhost_scsi_controller_add_target", 00:06:48.748 "vhost_start_scsi_controller", 00:06:48.748 "vhost_create_scsi_controller", 00:06:48.748 "thread_set_cpumask", 00:06:48.748 "scheduler_set_options", 00:06:48.748 "framework_get_governor", 00:06:48.748 "framework_get_scheduler", 00:06:48.748 "framework_set_scheduler", 00:06:48.748 "framework_get_reactors", 00:06:48.748 "thread_get_io_channels", 00:06:48.748 "thread_get_pollers", 00:06:48.748 "thread_get_stats", 00:06:48.748 "framework_monitor_context_switch", 00:06:48.748 "spdk_kill_instance", 00:06:48.748 "log_enable_timestamps", 00:06:48.748 "log_get_flags", 00:06:48.748 "log_clear_flag", 00:06:48.748 "log_set_flag", 00:06:48.748 "log_get_level", 00:06:48.748 "log_set_level", 00:06:48.748 "log_get_print_level", 00:06:48.748 "log_set_print_level", 00:06:48.748 "framework_enable_cpumask_locks", 00:06:48.748 "framework_disable_cpumask_locks", 00:06:48.748 "framework_wait_init", 00:06:48.748 "framework_start_init", 00:06:48.748 "scsi_get_devices", 00:06:48.748 "bdev_get_histogram", 00:06:48.748 "bdev_enable_histogram", 00:06:48.748 "bdev_set_qos_limit", 00:06:48.748 "bdev_set_qd_sampling_period", 00:06:48.748 "bdev_get_bdevs", 00:06:48.748 "bdev_reset_iostat", 00:06:48.748 "bdev_get_iostat", 00:06:48.748 "bdev_examine", 00:06:48.748 "bdev_wait_for_examine", 00:06:48.748 "bdev_set_options", 00:06:48.748 "accel_get_stats", 00:06:48.748 "accel_set_options", 00:06:48.748 "accel_set_driver", 00:06:48.748 "accel_crypto_key_destroy", 00:06:48.748 "accel_crypto_keys_get", 00:06:48.748 "accel_crypto_key_create", 00:06:48.748 "accel_assign_opc", 00:06:48.748 "accel_get_module_info", 00:06:48.748 "accel_get_opc_assignments", 00:06:48.748 "vmd_rescan", 00:06:48.748 "vmd_remove_device", 00:06:48.748 
"vmd_enable", 00:06:48.748 "sock_get_default_impl", 00:06:48.748 "sock_set_default_impl", 00:06:48.748 "sock_impl_set_options", 00:06:48.748 "sock_impl_get_options", 00:06:48.748 "iobuf_get_stats", 00:06:48.748 "iobuf_set_options", 00:06:48.748 "keyring_get_keys", 00:06:48.748 "framework_get_pci_devices", 00:06:48.748 "framework_get_config", 00:06:48.748 "framework_get_subsystems", 00:06:48.748 "fsdev_set_opts", 00:06:48.748 "fsdev_get_opts", 00:06:48.748 "trace_get_info", 00:06:48.748 "trace_get_tpoint_group_mask", 00:06:48.748 "trace_disable_tpoint_group", 00:06:48.748 "trace_enable_tpoint_group", 00:06:48.748 "trace_clear_tpoint_mask", 00:06:48.748 "trace_set_tpoint_mask", 00:06:48.748 "notify_get_notifications", 00:06:48.748 "notify_get_types", 00:06:48.748 "spdk_get_version", 00:06:48.748 "rpc_get_methods" 00:06:48.748 ] 00:06:48.748 03:15:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.748 03:15:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:48.748 03:15:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71857 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71857 ']' 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71857 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.748 03:15:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71857 00:06:49.006 03:15:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.006 03:15:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.006 03:15:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 71857' 00:06:49.006 killing process with pid 71857 00:06:49.006 03:15:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71857 00:06:49.006 03:15:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71857 00:06:49.570 ************************************ 00:06:49.570 END TEST spdkcli_tcp 00:06:49.570 ************************************ 00:06:49.570 00:06:49.570 real 0m2.191s 00:06:49.570 user 0m3.519s 00:06:49.570 sys 0m0.773s 00:06:49.570 03:15:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.570 03:15:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.570 03:15:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.570 03:15:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.570 03:15:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.570 03:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:49.570 ************************************ 00:06:49.570 START TEST dpdk_mem_utility 00:06:49.570 ************************************ 00:06:49.571 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.829 * Looking for test storage... 
00:06:49.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.829 03:15:37 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.829 --rc genhtml_branch_coverage=1 00:06:49.829 --rc genhtml_function_coverage=1 00:06:49.829 --rc genhtml_legend=1 00:06:49.829 --rc geninfo_all_blocks=1 00:06:49.829 --rc geninfo_unexecuted_blocks=1 00:06:49.829 00:06:49.829 ' 00:06:49.829 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.829 --rc genhtml_branch_coverage=1 00:06:49.829 --rc genhtml_function_coverage=1 00:06:49.830 --rc genhtml_legend=1 00:06:49.830 --rc geninfo_all_blocks=1 00:06:49.830 --rc 
geninfo_unexecuted_blocks=1 00:06:49.830 00:06:49.830 ' 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.830 --rc genhtml_branch_coverage=1 00:06:49.830 --rc genhtml_function_coverage=1 00:06:49.830 --rc genhtml_legend=1 00:06:49.830 --rc geninfo_all_blocks=1 00:06:49.830 --rc geninfo_unexecuted_blocks=1 00:06:49.830 00:06:49.830 ' 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.830 --rc genhtml_branch_coverage=1 00:06:49.830 --rc genhtml_function_coverage=1 00:06:49.830 --rc genhtml_legend=1 00:06:49.830 --rc geninfo_all_blocks=1 00:06:49.830 --rc geninfo_unexecuted_blocks=1 00:06:49.830 00:06:49.830 ' 00:06:49.830 03:15:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.830 03:15:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71957 00:06:49.830 03:15:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.830 03:15:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71957 00:06:49.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71957 ']' 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
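The `rpc_cmd env_dpdk_get_mem_stats` call further down returns `{"filename": "/tmp/spdk_mem_dump.txt"}`, and the long `rpc_get_methods` list earlier shows the method names `scripts/rpc.py` can invoke. A rough sketch of the JSON-RPC 2.0 message shape such a client would exchange is below; the framing is an assumption based on standard JSON-RPC, not a verbatim copy of SPDK's client, and the transport (UNIX socket or the socat TCP bridge on port 9998) is elided.

```python
# Hypothetical JSON-RPC 2.0 request/response helpers, mirroring the shape of
# calls like env_dpdk_get_mem_stats seen in the log. Transport is omitted.
import json

def make_request(method, params=None, req_id=1):
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params:
        req["params"] = params
    return json.dumps(req)

def parse_response(raw):
    resp = json.loads(raw)
    if "error" in resp:
        raise RuntimeError(resp["error"])
    return resp["result"]

print(make_request("env_dpdk_get_mem_stats"))
# Canned response matching the {"filename": ...} result shown in the log:
canned = '{"jsonrpc": "2.0", "id": 1, "result": {"filename": "/tmp/spdk_mem_dump.txt"}}'
print(parse_response(canned)["filename"])  # /tmp/spdk_mem_dump.txt
```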
00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.830 03:15:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.830 [2024-11-21 03:15:37.375366] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:49.830 [2024-11-21 03:15:37.375616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71957 ] 00:06:50.088 [2024-11-21 03:15:37.518287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.088 [2024-11-21 03:15:37.555725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.088 [2024-11-21 03:15:37.601219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.654 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.654 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:50.654 03:15:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:50.654 03:15:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:50.654 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.655 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.655 { 00:06:50.655 "filename": "/tmp/spdk_mem_dump.txt" 00:06:50.655 } 00:06:50.655 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.655 03:15:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:50.916 DPDK memory size 810.000000 MiB in 1 heap(s) 
00:06:50.916 1 heaps totaling size 810.000000 MiB 00:06:50.916 size: 810.000000 MiB heap id: 0 00:06:50.916 end heaps---------- 00:06:50.916 9 mempools totaling size 595.772034 MiB 00:06:50.916 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:50.916 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:50.916 size: 92.545471 MiB name: bdev_io_71957 00:06:50.916 size: 50.003479 MiB name: msgpool_71957 00:06:50.916 size: 36.509338 MiB name: fsdev_io_71957 00:06:50.916 size: 21.763794 MiB name: PDU_Pool 00:06:50.916 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:50.916 size: 4.133484 MiB name: evtpool_71957 00:06:50.916 size: 0.026123 MiB name: Session_Pool 00:06:50.916 end mempools------- 00:06:50.916 6 memzones totaling size 4.142822 MiB 00:06:50.916 size: 1.000366 MiB name: RG_ring_0_71957 00:06:50.916 size: 1.000366 MiB name: RG_ring_1_71957 00:06:50.916 size: 1.000366 MiB name: RG_ring_4_71957 00:06:50.916 size: 1.000366 MiB name: RG_ring_5_71957 00:06:50.916 size: 0.125366 MiB name: RG_ring_2_71957 00:06:50.916 size: 0.015991 MiB name: RG_ring_3_71957 00:06:50.916 end memzones------- 00:06:50.916 03:15:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:50.916 heap id: 0 total size: 810.000000 MiB number of busy elements: 309 number of free elements: 15 00:06:50.916 list of free elements. 
size: 10.954529 MiB 00:06:50.916 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:50.916 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:50.916 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:50.916 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:50.916 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:50.916 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:50.916 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:50.916 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:50.916 element at address: 0x20001a600000 with size: 0.567322 MiB 00:06:50.916 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:50.916 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:50.916 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:50.916 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:50.916 element at address: 0x200027a00000 with size: 0.396667 MiB 00:06:50.916 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:50.916 list of standard malloc elements. 
size: 199.126587 MiB 00:06:50.916 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:50.916 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:50.916 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:50.916 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:50.916 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:50.916 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:50.916 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:50.916 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:50.916 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:50.916 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:50.917 element at 
address: 0x2000004ff400 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f080 with size: 0.000183 MiB 
00:06:50.917 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d780 with 
size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:50.917 element at address: 
0x200000c7ec80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:50.917 
element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:50.917 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692380 with size: 0.000183 
MiB 00:06:50.918 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693880 
with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:50.918 element at 
address: 0x20001a694d80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:50.918 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a658c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a65980 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6c580 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d200 with size: 0.000183 MiB 
00:06:50.918 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e700 with 
size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:50.918 element at address: 
0x200027a6fc00 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:50.918 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:50.919 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:50.919 list of memzone associated elements. size: 599.918884 MiB 00:06:50.919 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:50.919 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:50.919 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:50.919 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:50.919 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:50.919 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71957_0 00:06:50.919 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:50.919 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71957_0 00:06:50.919 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:50.919 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71957_0 00:06:50.919 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:50.919 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:50.919 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:50.919 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:50.919 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:50.919 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71957_0 00:06:50.919 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:50.919 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71957 00:06:50.919 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:50.919 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71957 00:06:50.919 element at address: 
0x20000a6fde40 with size: 1.008118 MiB 00:06:50.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:50.919 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:50.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:50.919 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:50.919 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:50.919 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:50.919 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:50.919 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:50.919 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71957 00:06:50.919 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:50.919 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71957 00:06:50.919 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:50.919 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71957 00:06:50.919 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:50.919 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71957 00:06:50.919 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:50.919 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71957 00:06:50.919 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:50.919 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71957 00:06:50.919 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:50.919 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:50.919 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:50.919 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:50.919 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:50.919 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:50.919 element at 
address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:50.919 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71957 00:06:50.919 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:50.919 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71957 00:06:50.919 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:50.919 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:50.919 element at address: 0x200027a65a40 with size: 0.023743 MiB 00:06:50.919 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:50.919 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:50.919 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71957 00:06:50.919 element at address: 0x200027a6bb80 with size: 0.002441 MiB 00:06:50.919 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:50.919 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:50.919 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71957 00:06:50.919 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:50.919 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71957 00:06:50.919 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:50.919 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71957 00:06:50.919 element at address: 0x200027a6c640 with size: 0.000305 MiB 00:06:50.919 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:50.919 03:15:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:50.919 03:15:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71957 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71957 ']' 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71957 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # 
uname 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71957 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71957' 00:06:50.919 killing process with pid 71957 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71957 00:06:50.919 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71957 00:06:51.487 00:06:51.487 real 0m1.957s 00:06:51.487 user 0m1.753s 00:06:51.487 sys 0m0.686s 00:06:51.487 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.487 03:15:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.487 ************************************ 00:06:51.487 END TEST dpdk_mem_utility 00:06:51.487 ************************************ 00:06:51.487 03:15:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:51.487 03:15:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.487 03:15:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.487 03:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:51.487 ************************************ 00:06:51.487 START TEST event 00:06:51.487 ************************************ 00:06:51.487 03:15:39 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:51.744 * Looking for test storage... 
00:06:51.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:51.744 03:15:39 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.744 03:15:39 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.744 03:15:39 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.744 03:15:39 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.745 03:15:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.745 03:15:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.745 03:15:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.745 03:15:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.745 03:15:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.745 03:15:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.745 03:15:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.745 03:15:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.745 03:15:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.745 03:15:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.745 03:15:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.745 03:15:39 event -- scripts/common.sh@344 -- # case "$op" in 00:06:51.745 03:15:39 event -- scripts/common.sh@345 -- # : 1 00:06:51.745 03:15:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.745 03:15:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.745 03:15:39 event -- scripts/common.sh@365 -- # decimal 1 00:06:51.745 03:15:39 event -- scripts/common.sh@353 -- # local d=1 00:06:51.745 03:15:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.745 03:15:39 event -- scripts/common.sh@355 -- # echo 1 00:06:51.745 03:15:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.745 03:15:39 event -- scripts/common.sh@366 -- # decimal 2 00:06:51.745 03:15:39 event -- scripts/common.sh@353 -- # local d=2 00:06:51.745 03:15:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.745 03:15:39 event -- scripts/common.sh@355 -- # echo 2 00:06:51.745 03:15:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.745 03:15:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.745 03:15:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.745 03:15:39 event -- scripts/common.sh@368 -- # return 0 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.745 --rc genhtml_branch_coverage=1 00:06:51.745 --rc genhtml_function_coverage=1 00:06:51.745 --rc genhtml_legend=1 00:06:51.745 --rc geninfo_all_blocks=1 00:06:51.745 --rc geninfo_unexecuted_blocks=1 00:06:51.745 00:06:51.745 ' 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.745 --rc genhtml_branch_coverage=1 00:06:51.745 --rc genhtml_function_coverage=1 00:06:51.745 --rc genhtml_legend=1 00:06:51.745 --rc geninfo_all_blocks=1 00:06:51.745 --rc geninfo_unexecuted_blocks=1 00:06:51.745 00:06:51.745 ' 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.745 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:51.745 --rc genhtml_branch_coverage=1 00:06:51.745 --rc genhtml_function_coverage=1 00:06:51.745 --rc genhtml_legend=1 00:06:51.745 --rc geninfo_all_blocks=1 00:06:51.745 --rc geninfo_unexecuted_blocks=1 00:06:51.745 00:06:51.745 ' 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.745 --rc genhtml_branch_coverage=1 00:06:51.745 --rc genhtml_function_coverage=1 00:06:51.745 --rc genhtml_legend=1 00:06:51.745 --rc geninfo_all_blocks=1 00:06:51.745 --rc geninfo_unexecuted_blocks=1 00:06:51.745 00:06:51.745 ' 00:06:51.745 03:15:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:51.745 03:15:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:51.745 03:15:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:51.745 03:15:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.745 03:15:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.745 ************************************ 00:06:51.745 START TEST event_perf 00:06:51.745 ************************************ 00:06:51.745 03:15:39 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.003 Running I/O for 1 seconds...[2024-11-21 03:15:39.327554] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:06:52.003 [2024-11-21 03:15:39.327755] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 00:06:52.003 [2024-11-21 03:15:39.469635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.003 [2024-11-21 03:15:39.505591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.003 [2024-11-21 03:15:39.556089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.003 [2024-11-21 03:15:39.556354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.003 [2024-11-21 03:15:39.556453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.003 Running I/O for 1 seconds...[2024-11-21 03:15:39.556311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.379 00:06:53.379 lcore 0: 101486 00:06:53.379 lcore 1: 101490 00:06:53.379 lcore 2: 101488 00:06:53.379 lcore 3: 101487 00:06:53.379 done. 
00:06:53.379 00:06:53.379 real 0m1.379s 00:06:53.379 user 0m4.108s 00:06:53.379 sys 0m0.147s 00:06:53.379 03:15:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.379 03:15:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.379 ************************************ 00:06:53.379 END TEST event_perf 00:06:53.379 ************************************ 00:06:53.379 03:15:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:53.379 03:15:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:53.379 03:15:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.379 03:15:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.379 ************************************ 00:06:53.379 START TEST event_reactor 00:06:53.379 ************************************ 00:06:53.379 03:15:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:53.379 [2024-11-21 03:15:40.770728] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:53.379 [2024-11-21 03:15:40.770889] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72078 ] 00:06:53.379 [2024-11-21 03:15:40.910009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:53.379 [2024-11-21 03:15:40.936731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.638 [2024-11-21 03:15:40.994089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.575 test_start 00:06:54.575 oneshot 00:06:54.575 tick 100 00:06:54.575 tick 100 00:06:54.575 tick 250 00:06:54.575 tick 100 00:06:54.575 tick 100 00:06:54.575 tick 100 00:06:54.575 tick 250 00:06:54.575 tick 500 00:06:54.575 tick 100 00:06:54.575 tick 100 00:06:54.575 tick 250 00:06:54.575 tick 100 00:06:54.575 tick 100 00:06:54.575 test_end 00:06:54.575 00:06:54.575 real 0m1.364s 00:06:54.575 user 0m1.137s 00:06:54.575 sys 0m0.120s 00:06:54.575 ************************************ 00:06:54.575 END TEST event_reactor 00:06:54.575 ************************************ 00:06:54.575 03:15:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.575 03:15:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:54.835 03:15:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.835 03:15:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:54.835 03:15:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.835 03:15:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.835 ************************************ 00:06:54.835 START TEST event_reactor_perf 00:06:54.835 ************************************ 00:06:54.835 03:15:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.835 [2024-11-21 03:15:42.196706] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:06:54.835 [2024-11-21 03:15:42.196862] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72119 ] 00:06:54.835 [2024-11-21 03:15:42.334503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.835 [2024-11-21 03:15:42.373580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.094 [2024-11-21 03:15:42.417781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.034 test_start 00:06:56.034 test_end 00:06:56.034 Performance: 354497 events per second 00:06:56.034 00:06:56.034 real 0m1.330s 00:06:56.034 user 0m1.129s 00:06:56.034 sys 0m0.094s 00:06:56.034 03:15:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.034 ************************************ 00:06:56.034 END TEST event_reactor_perf 00:06:56.034 ************************************ 00:06:56.034 03:15:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.034 03:15:43 event -- event/event.sh@49 -- # uname -s 00:06:56.034 03:15:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:56.034 03:15:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:56.034 03:15:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.034 03:15:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.034 03:15:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.034 ************************************ 00:06:56.034 START TEST event_scheduler 00:06:56.034 ************************************ 00:06:56.034 03:15:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:56.294 * Looking for test storage... 00:06:56.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.294 03:15:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.294 --rc genhtml_branch_coverage=1 00:06:56.294 --rc genhtml_function_coverage=1 00:06:56.294 --rc genhtml_legend=1 00:06:56.294 --rc geninfo_all_blocks=1 00:06:56.294 --rc geninfo_unexecuted_blocks=1 00:06:56.294 00:06:56.294 ' 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.294 
--rc genhtml_branch_coverage=1 00:06:56.294 --rc genhtml_function_coverage=1 00:06:56.294 --rc genhtml_legend=1 00:06:56.294 --rc geninfo_all_blocks=1 00:06:56.294 --rc geninfo_unexecuted_blocks=1 00:06:56.294 00:06:56.294 ' 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.294 --rc genhtml_branch_coverage=1 00:06:56.294 --rc genhtml_function_coverage=1 00:06:56.294 --rc genhtml_legend=1 00:06:56.294 --rc geninfo_all_blocks=1 00:06:56.294 --rc geninfo_unexecuted_blocks=1 00:06:56.294 00:06:56.294 ' 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.294 --rc genhtml_branch_coverage=1 00:06:56.294 --rc genhtml_function_coverage=1 00:06:56.294 --rc genhtml_legend=1 00:06:56.294 --rc geninfo_all_blocks=1 00:06:56.294 --rc geninfo_unexecuted_blocks=1 00:06:56.294 00:06:56.294 ' 00:06:56.294 03:15:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:56.294 03:15:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72185 00:06:56.294 03:15:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.294 03:15:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72185 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72185 ']' 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:56.294 03:15:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.294 03:15:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.554 [2024-11-21 03:15:43.863276] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:06:56.554 [2024-11-21 03:15:43.863507] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72185 ] 00:06:56.554 [2024-11-21 03:15:44.006923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.554 [2024-11-21 03:15:44.029349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.554 [2024-11-21 03:15:44.065435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.554 [2024-11-21 03:15:44.065439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.554 [2024-11-21 03:15:44.065569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.554 [2024-11-21 03:15:44.065694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:57.495 03:15:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@10 
-- # set +x 00:06:57.495 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:57.495 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:57.495 POWER: intel_pstate driver is not supported 00:06:57.495 POWER: cppc_cpufreq driver is not supported 00:06:57.495 POWER: amd-pstate driver is not supported 00:06:57.495 POWER: acpi-cpufreq driver is not supported 00:06:57.495 POWER: Unable to set Power Management Environment for lcore 0 00:06:57.495 [2024-11-21 03:15:44.780147] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:57.495 [2024-11-21 03:15:44.780265] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:57.495 [2024-11-21 03:15:44.780317] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:57.495 [2024-11-21 03:15:44.780372] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:57.495 [2024-11-21 03:15:44.780417] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:57.495 [2024-11-21 03:15:44.780451] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 [2024-11-21 03:15:44.854843] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 ************************************ 00:06:57.495 START TEST scheduler_create_thread 00:06:57.495 ************************************ 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 2 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 3 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 4 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 5 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 6 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.495 7 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 8 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 9 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.495 03:15:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.082 10 00:06:58.082 03:15:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.082 03:15:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:58.082 03:15:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.082 03:15:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.464 03:15:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.464 03:15:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:59.464 03:15:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:59.464 03:15:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.464 03:15:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.103 03:15:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.103 03:15:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:00.103 03:15:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.103 03:15:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.039 03:15:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.039 03:15:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:01.039 03:15:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:01.039 03:15:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.039 03:15:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.606 03:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.606 00:07:01.606 real 0m4.219s 00:07:01.606 user 0m0.029s 00:07:01.606 sys 0m0.008s 00:07:01.606 ************************************ 00:07:01.606 END TEST scheduler_create_thread 00:07:01.606 ************************************ 00:07:01.606 03:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.606 03:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.606 03:15:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:01.606 03:15:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72185 00:07:01.606 03:15:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72185 ']' 00:07:01.606 03:15:49 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72185 00:07:01.606 03:15:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:01.606 03:15:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.606 03:15:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72185 00:07:01.865 killing process with pid 72185 00:07:01.865 03:15:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:01.865 03:15:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:01.865 03:15:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72185' 00:07:01.865 03:15:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72185 00:07:01.865 03:15:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72185 00:07:01.865 [2024-11-21 03:15:49.369551] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:02.124 00:07:02.124 real 0m6.102s 00:07:02.124 user 0m13.345s 00:07:02.124 sys 0m0.512s 00:07:02.124 03:15:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.124 ************************************ 00:07:02.124 END TEST event_scheduler 00:07:02.124 ************************************ 00:07:02.124 03:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 03:15:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:02.383 03:15:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:02.383 03:15:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.383 03:15:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.383 03:15:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 ************************************ 00:07:02.383 START TEST app_repeat 00:07:02.383 ************************************ 00:07:02.383 03:15:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72304 00:07:02.383 
Process app_repeat pid: 72304 00:07:02.383 spdk_app_start Round 0 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72304' 00:07:02.383 03:15:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.384 03:15:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:02.384 03:15:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72304 /var/tmp/spdk-nbd.sock 00:07:02.384 03:15:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:07:02.384 03:15:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.384 03:15:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.384 03:15:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.384 03:15:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.384 03:15:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.384 [2024-11-21 03:15:49.796401] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:02.384 [2024-11-21 03:15:49.796645] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72304 ] 00:07:02.384 [2024-11-21 03:15:49.934436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:02.642 [2024-11-21 03:15:49.970120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.642 [2024-11-21 03:15:50.018631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.642 [2024-11-21 03:15:50.018660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.210 03:15:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.210 03:15:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:03.210 03:15:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.469 Malloc0 00:07:03.469 03:15:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.729 Malloc1 00:07:03.729 03:15:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.729 03:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.730 03:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.730 03:15:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.989 /dev/nbd0 00:07:03.989 03:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.989 03:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.989 1+0 records in 00:07:03.989 1+0 records out 00:07:03.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651973 s, 6.3 MB/s 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 
00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.989 03:15:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.989 03:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.989 03:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.989 03:15:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.249 /dev/nbd1 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.249 1+0 records in 00:07:04.249 1+0 records out 00:07:04.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405172 s, 10.1 MB/s 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.249 03:15:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.249 03:15:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.508 03:15:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.508 { 00:07:04.508 "nbd_device": "/dev/nbd0", 00:07:04.508 "bdev_name": "Malloc0" 00:07:04.508 }, 00:07:04.508 { 00:07:04.508 "nbd_device": "/dev/nbd1", 00:07:04.508 "bdev_name": "Malloc1" 00:07:04.508 } 00:07:04.508 ]' 00:07:04.508 03:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.508 { 00:07:04.508 "nbd_device": "/dev/nbd0", 00:07:04.508 "bdev_name": "Malloc0" 00:07:04.508 }, 00:07:04.508 { 00:07:04.508 "nbd_device": "/dev/nbd1", 00:07:04.508 "bdev_name": "Malloc1" 00:07:04.508 } 00:07:04.508 ]' 00:07:04.508 03:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.768 /dev/nbd1' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.768 
/dev/nbd1' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.768 256+0 records in 00:07:04.768 256+0 records out 00:07:04.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694109 s, 151 MB/s 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.768 256+0 records in 00:07:04.768 256+0 records out 00:07:04.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258258 s, 40.6 MB/s 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.768 256+0 records in 00:07:04.768 256+0 records out 00:07:04.768 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0244233 s, 42.9 MB/s 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.768 03:15:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.768 03:15:52 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.027 03:15:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.286 03:15:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.287 
03:15:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.545 03:15:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.546 03:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.546 03:15:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.546 03:15:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.546 03:15:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.804 03:15:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.096 [2024-11-21 03:15:53.579262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.096 [2024-11-21 03:15:53.622613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.096 [2024-11-21 03:15:53.622614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.354 [2024-11-21 03:15:53.701649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.354 [2024-11-21 03:15:53.701766] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:08.889 03:15:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.889 spdk_app_start Round 1 00:07:08.889 03:15:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:08.889 03:15:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72304 /var/tmp/spdk-nbd.sock 00:07:08.889 03:15:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:07:08.889 03:15:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.889 03:15:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.889 03:15:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.889 03:15:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.889 03:15:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.147 03:15:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.147 03:15:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:09.147 03:15:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.406 Malloc0 00:07:09.406 03:15:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.665 Malloc1 00:07:09.665 03:15:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.665 
03:15:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.665 03:15:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.924 /dev/nbd0 00:07:09.924 03:15:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.924 03:15:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:09.924 03:15:57 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.924 1+0 records in 00:07:09.924 1+0 records out 00:07:09.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437536 s, 9.4 MB/s 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.924 03:15:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:09.924 03:15:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.924 03:15:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.924 03:15:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:10.184 /dev/nbd1 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.184 03:15:57 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.184 1+0 records in 00:07:10.184 1+0 records out 00:07:10.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462282 s, 8.9 MB/s 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.184 03:15:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.184 03:15:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.443 { 00:07:10.443 "nbd_device": "/dev/nbd0", 00:07:10.443 "bdev_name": "Malloc0" 00:07:10.443 }, 00:07:10.443 { 00:07:10.443 "nbd_device": "/dev/nbd1", 00:07:10.443 "bdev_name": 
"Malloc1" 00:07:10.443 } 00:07:10.443 ]' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.443 { 00:07:10.443 "nbd_device": "/dev/nbd0", 00:07:10.443 "bdev_name": "Malloc0" 00:07:10.443 }, 00:07:10.443 { 00:07:10.443 "nbd_device": "/dev/nbd1", 00:07:10.443 "bdev_name": "Malloc1" 00:07:10.443 } 00:07:10.443 ]' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.443 /dev/nbd1' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.443 /dev/nbd1' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.443 256+0 records in 00:07:10.443 256+0 records out 00:07:10.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129663 s, 80.9 MB/s 
00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.443 03:15:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.704 256+0 records in 00:07:10.704 256+0 records out 00:07:10.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257334 s, 40.7 MB/s 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.704 256+0 records in 00:07:10.704 256+0 records out 00:07:10.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232987 s, 45.0 MB/s 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.704 03:15:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.964 03:15:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.224 03:15:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.224 03:15:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.224 03:15:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.484 03:15:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.484 03:15:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.743 03:15:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.743 [2024-11-21 03:15:59.216118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.743 [2024-11-21 03:15:59.247707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.743 [2024-11-21 03:15:59.247715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.743 [2024-11-21 03:15:59.292345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.743 [2024-11-21 03:15:59.292433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:15.037 03:16:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:15.037 spdk_app_start Round 2 00:07:15.037 03:16:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:15.037 03:16:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72304 /var/tmp/spdk-nbd.sock 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.037 03:16:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:15.037 03:16:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.037 Malloc0 00:07:15.037 03:16:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.297 Malloc1 00:07:15.297 03:16:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.297 03:16:02 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.297 03:16:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.557 /dev/nbd0 00:07:15.557 03:16:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.557 03:16:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.557 1+0 records in 00:07:15.557 1+0 records out 00:07:15.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549517 s, 7.5 MB/s 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.557 03:16:03 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.557 03:16:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:15.557 03:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.557 03:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.557 03:16:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:15.816 /dev/nbd1 00:07:15.817 03:16:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.817 03:16:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.817 03:16:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:15.817 03:16:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:15.817 03:16:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.817 03:16:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.817 03:16:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.076 1+0 records in 00:07:16.076 1+0 records out 00:07:16.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454775 s, 9.0 MB/s 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:16.076 03:16:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.076 03:16:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.076 { 00:07:16.076 "nbd_device": "/dev/nbd0", 00:07:16.076 "bdev_name": "Malloc0" 00:07:16.076 }, 00:07:16.076 { 00:07:16.076 "nbd_device": "/dev/nbd1", 00:07:16.076 "bdev_name": "Malloc1" 00:07:16.076 } 00:07:16.076 ]' 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.076 { 00:07:16.076 "nbd_device": "/dev/nbd0", 00:07:16.076 "bdev_name": "Malloc0" 00:07:16.076 }, 00:07:16.076 { 00:07:16.076 "nbd_device": "/dev/nbd1", 00:07:16.076 "bdev_name": "Malloc1" 00:07:16.076 } 00:07:16.076 ]' 00:07:16.076 03:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.335 /dev/nbd1' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.335 /dev/nbd1' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.335 
03:16:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.335 256+0 records in 00:07:16.335 256+0 records out 00:07:16.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129503 s, 81.0 MB/s 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.335 256+0 records in 00:07:16.335 256+0 records out 00:07:16.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281986 s, 37.2 MB/s 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.335 256+0 records in 00:07:16.335 256+0 records out 00:07:16.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248603 s, 42.2 MB/s 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.335 03:16:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.594 03:16:04 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.594 03:16:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.594 03:16:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.595 03:16:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.853 03:16:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.112 03:16:04 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.112 03:16:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.112 03:16:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.370 03:16:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.370 [2024-11-21 03:16:04.883805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.370 [2024-11-21 03:16:04.916595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.370 [2024-11-21 03:16:04.916601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.628 [2024-11-21 03:16:04.962116] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.628 [2024-11-21 03:16:04.962198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:20.918 03:16:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72304 /var/tmp/spdk-nbd.sock 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:20.918 03:16:07 event.app_repeat -- event/event.sh@39 -- # killprocess 72304 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72304 ']' 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72304 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72304 00:07:20.918 killing process with pid 72304 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.918 03:16:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72304' 00:07:20.918 03:16:08 
event.app_repeat -- common/autotest_common.sh@973 -- # kill 72304 00:07:20.918 03:16:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72304 00:07:20.918 spdk_app_start is called in Round 0. 00:07:20.918 Shutdown signal received, stop current app iteration 00:07:20.918 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 reinitialization... 00:07:20.918 spdk_app_start is called in Round 1. 00:07:20.918 Shutdown signal received, stop current app iteration 00:07:20.918 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 reinitialization... 00:07:20.918 spdk_app_start is called in Round 2. 00:07:20.918 Shutdown signal received, stop current app iteration 00:07:20.918 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 reinitialization... 00:07:20.918 spdk_app_start is called in Round 3. 00:07:20.918 Shutdown signal received, stop current app iteration 00:07:20.918 03:16:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:20.918 03:16:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:20.918 00:07:20.918 real 0m18.468s 00:07:20.918 user 0m40.824s 00:07:20.918 sys 0m3.191s 00:07:20.918 03:16:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.918 03:16:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.918 ************************************ 00:07:20.918 END TEST app_repeat 00:07:20.918 ************************************ 00:07:20.918 03:16:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:20.918 03:16:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:20.918 03:16:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.918 03:16:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.918 03:16:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.918 ************************************ 00:07:20.918 START TEST 
cpu_locks 00:07:20.918 ************************************ 00:07:20.918 03:16:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:20.918 * Looking for test storage... 00:07:20.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:20.918 03:16:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.918 03:16:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.918 03:16:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.919 03:16:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.919 --rc genhtml_branch_coverage=1 00:07:20.919 --rc genhtml_function_coverage=1 00:07:20.919 --rc genhtml_legend=1 00:07:20.919 --rc geninfo_all_blocks=1 00:07:20.919 --rc geninfo_unexecuted_blocks=1 00:07:20.919 00:07:20.919 ' 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.919 --rc genhtml_branch_coverage=1 00:07:20.919 --rc genhtml_function_coverage=1 00:07:20.919 --rc genhtml_legend=1 00:07:20.919 --rc geninfo_all_blocks=1 00:07:20.919 --rc geninfo_unexecuted_blocks=1 
00:07:20.919 00:07:20.919 ' 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.919 --rc genhtml_branch_coverage=1 00:07:20.919 --rc genhtml_function_coverage=1 00:07:20.919 --rc genhtml_legend=1 00:07:20.919 --rc geninfo_all_blocks=1 00:07:20.919 --rc geninfo_unexecuted_blocks=1 00:07:20.919 00:07:20.919 ' 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.919 --rc genhtml_branch_coverage=1 00:07:20.919 --rc genhtml_function_coverage=1 00:07:20.919 --rc genhtml_legend=1 00:07:20.919 --rc geninfo_all_blocks=1 00:07:20.919 --rc geninfo_unexecuted_blocks=1 00:07:20.919 00:07:20.919 ' 00:07:20.919 03:16:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:20.919 03:16:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:20.919 03:16:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:20.919 03:16:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.919 03:16:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.919 ************************************ 00:07:20.919 START TEST default_locks 00:07:20.919 ************************************ 00:07:20.919 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72737 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.185 
03:16:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72737 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72737 ']' 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.185 03:16:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.185 [2024-11-21 03:16:08.583074] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:21.185 [2024-11-21 03:16:08.583213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72737 ] 00:07:21.185 [2024-11-21 03:16:08.723836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:21.443 [2024-11-21 03:16:08.753210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.443 [2024-11-21 03:16:08.786951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.376 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.376 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:22.376 03:16:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72737 00:07:22.376 03:16:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72737 00:07:22.376 03:16:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.634 03:16:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72737 00:07:22.634 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72737 ']' 00:07:22.634 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72737 00:07:22.634 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.634 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.634 03:16:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72737 00:07:22.634 killing process with pid 72737 00:07:22.634 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.634 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.634 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72737' 00:07:22.634 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72737 00:07:22.634 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72737 00:07:22.892 03:16:10 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72737 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72737 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72737 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72737 ']' 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.892 ERROR: process (pid: 72737) is no longer running 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72737) - No such process 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.892 00:07:22.892 real 0m1.963s 00:07:22.892 user 0m2.134s 00:07:22.892 sys 0m0.605s 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.892 03:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.893 ************************************ 00:07:22.893 END TEST default_locks 00:07:22.893 ************************************ 00:07:23.151 03:16:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:23.151 03:16:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:07:23.151 03:16:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.151 03:16:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.151 ************************************ 00:07:23.151 START TEST default_locks_via_rpc 00:07:23.151 ************************************ 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72790 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72790 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72790 ']' 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.151 03:16:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.151 [2024-11-21 03:16:10.610593] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:07:23.151 [2024-11-21 03:16:10.610821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72790 ] 00:07:23.408 [2024-11-21 03:16:10.753386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.408 [2024-11-21 03:16:10.792412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.408 [2024-11-21 03:16:10.834491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.974 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.974 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.974 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:23.974 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.974 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.974 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72790 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72790 00:07:23.975 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72790 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72790 ']' 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72790 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72790 00:07:24.233 killing process with pid 72790 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72790' 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72790 00:07:24.233 03:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72790 00:07:24.800 00:07:24.800 real 0m1.710s 00:07:24.800 user 0m1.476s 
00:07:24.800 sys 0m0.793s 00:07:24.800 03:16:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.800 03:16:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.800 ************************************ 00:07:24.801 END TEST default_locks_via_rpc 00:07:24.801 ************************************ 00:07:24.801 03:16:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:24.801 03:16:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.801 03:16:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.801 03:16:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.801 ************************************ 00:07:24.801 START TEST non_locking_app_on_locked_coremask 00:07:24.801 ************************************ 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72842 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72842 /var/tmp/spdk.sock 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72842 ']' 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.801 03:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.059 [2024-11-21 03:16:12.385769] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:25.059 [2024-11-21 03:16:12.386078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ] 00:07:25.059 [2024-11-21 03:16:12.531114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:25.059 [2024-11-21 03:16:12.571866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.059 [2024-11-21 03:16:12.605644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72858 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72858 /var/tmp/spdk2.sock 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72858 ']' 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.992 03:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.992 [2024-11-21 03:16:13.419316] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:25.992 [2024-11-21 03:16:13.419584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72858 ] 00:07:26.250 [2024-11-21 03:16:13.560531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.250 [2024-11-21 03:16:13.607104] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:26.250 [2024-11-21 03:16:13.607190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.250 [2024-11-21 03:16:13.673487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.816 03:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.816 03:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.816 03:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72842 00:07:26.816 03:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.816 03:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72842 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72842 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # '[' -z 72842 ']' 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72842 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72842 00:07:27.750 killing process with pid 72842 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72842' 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72842 00:07:27.750 03:16:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72842 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72858 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72858 ']' 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72858 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72858 00:07:28.686 killing process with pid 72858 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72858' 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72858 00:07:28.686 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72858 00:07:29.254 ************************************ 00:07:29.254 END TEST non_locking_app_on_locked_coremask 00:07:29.254 ************************************ 00:07:29.254 00:07:29.254 real 0m4.247s 00:07:29.254 user 0m4.496s 00:07:29.254 sys 0m1.399s 00:07:29.254 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.254 03:16:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.254 03:16:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:29.254 03:16:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.254 03:16:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.254 03:16:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.254 ************************************ 00:07:29.254 START TEST locking_app_on_unlocked_coremask 00:07:29.254 ************************************ 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:29.254 03:16:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72927 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72927 /var/tmp/spdk.sock 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72927 ']' 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.254 03:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.254 [2024-11-21 03:16:16.698298] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:29.254 [2024-11-21 03:16:16.698542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72927 ] 00:07:29.513 [2024-11-21 03:16:16.840419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:29.513 [2024-11-21 03:16:16.878965] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:29.513 [2024-11-21 03:16:16.879072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.513 [2024-11-21 03:16:16.916833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72943 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72943 /var/tmp/spdk2.sock 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72943 ']' 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.080 03:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.338 [2024-11-21 03:16:17.694519] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:30.338 [2024-11-21 03:16:17.694750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:07:30.338 [2024-11-21 03:16:17.838420] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:30.338 [2024-11-21 03:16:17.879926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.597 [2024-11-21 03:16:17.952294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.162 03:16:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.162 03:16:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:31.162 03:16:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72943 00:07:31.162 03:16:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72943 00:07:31.162 03:16:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72927 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72927 ']' 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 72927 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72927 00:07:31.738 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.739 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.739 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72927' 00:07:31.739 killing process with pid 72927 00:07:31.739 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72927 00:07:31.739 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72927 00:07:32.672 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72943 00:07:32.673 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72943 ']' 00:07:32.673 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72943 00:07:32.673 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.673 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.673 03:16:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72943 00:07:32.673 killing process with pid 72943 00:07:32.673 03:16:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.673 03:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.673 03:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72943' 00:07:32.673 03:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72943 00:07:32.673 03:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72943 00:07:32.931 ************************************ 00:07:32.931 END TEST locking_app_on_unlocked_coremask 00:07:32.931 ************************************ 00:07:32.931 00:07:32.931 real 0m3.855s 00:07:32.931 user 0m4.115s 00:07:32.931 sys 0m1.169s 00:07:32.931 03:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.931 03:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 03:16:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:33.189 03:16:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.189 03:16:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.189 03:16:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 ************************************ 00:07:33.189 START TEST locking_app_on_locked_coremask 00:07:33.189 ************************************ 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73012 00:07:33.189 03:16:20 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 73012 /var/tmp/spdk.sock 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73012 ']' 00:07:33.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.189 03:16:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.189 [2024-11-21 03:16:20.618099] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:33.189 [2024-11-21 03:16:20.618242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73012 ] 00:07:33.448 [2024-11-21 03:16:20.759261] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:33.448 [2024-11-21 03:16:20.796028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.448 [2024-11-21 03:16:20.829564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73028 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73028 /var/tmp/spdk2.sock 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73028 /var/tmp/spdk2.sock 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.014 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73028 /var/tmp/spdk2.sock 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73028 ']' 00:07:34.015 03:16:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.015 03:16:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.273 [2024-11-21 03:16:21.655684] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:34.273 [2024-11-21 03:16:21.656011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73028 ] 00:07:34.273 [2024-11-21 03:16:21.813249] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.531 [2024-11-21 03:16:21.855471] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73012 has claimed it. 00:07:34.531 [2024-11-21 03:16:21.855579] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:34.789 ERROR: process (pid: 73028) is no longer running 00:07:34.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73028) - No such process 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 73012 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73012 00:07:34.789 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 73012 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73012 ']' 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73012 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73012 00:07:35.047 
03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73012' 00:07:35.047 killing process with pid 73012 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73012 00:07:35.047 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73012 00:07:35.613 00:07:35.613 real 0m2.431s 00:07:35.613 user 0m2.634s 00:07:35.613 sys 0m0.784s 00:07:35.613 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.613 03:16:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.613 ************************************ 00:07:35.614 END TEST locking_app_on_locked_coremask 00:07:35.614 ************************************ 00:07:35.614 03:16:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:35.614 03:16:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.614 03:16:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.614 03:16:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.614 ************************************ 00:07:35.614 START TEST locking_overlapped_coremask 00:07:35.614 ************************************ 00:07:35.614 03:16:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73070 00:07:35.614 03:16:23 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73070 /var/tmp/spdk.sock 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73070 ']' 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.614 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.614 [2024-11-21 03:16:23.108070] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:35.614 [2024-11-21 03:16:23.108216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73070 ] 00:07:35.872 [2024-11-21 03:16:23.250352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:35.873 [2024-11-21 03:16:23.272866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.873 [2024-11-21 03:16:23.309493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.873 [2024-11-21 03:16:23.309710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.873 [2024-11-21 03:16:23.309817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73088 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73088 /var/tmp/spdk2.sock 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73088 /var/tmp/spdk2.sock 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:36.439 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.439 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:36.697 03:16:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.697 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 
73088 /var/tmp/spdk2.sock 00:07:36.697 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73088 ']' 00:07:36.697 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.697 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.698 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.698 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.698 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.698 [2024-11-21 03:16:24.084775] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:36.698 [2024-11-21 03:16:24.085065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73088 ] 00:07:36.698 [2024-11-21 03:16:24.227484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.698 [2024-11-21 03:16:24.259933] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73070 has claimed it. 00:07:36.698 [2024-11-21 03:16:24.259993] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:37.264 ERROR: process (pid: 73088) is no longer running 00:07:37.264 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73088) - No such process 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73070 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 73070 ']' 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 73070 00:07:37.264 03:16:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73070 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73070' 00:07:37.264 killing process with pid 73070 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 73070 00:07:37.264 03:16:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 73070 00:07:37.830 00:07:37.830 real 0m2.259s 00:07:37.830 user 0m5.944s 00:07:37.830 sys 0m0.630s 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.830 ************************************ 00:07:37.830 END TEST locking_overlapped_coremask 00:07:37.830 ************************************ 00:07:37.830 03:16:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:37.830 03:16:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.830 03:16:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.830 03:16:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.830 ************************************ 00:07:37.830 START TEST 
locking_overlapped_coremask_via_rpc 00:07:37.830 ************************************ 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73130 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73130 /var/tmp/spdk.sock 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73130 ']' 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.830 03:16:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.088 [2024-11-21 03:16:25.454968] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:07:38.088 [2024-11-21 03:16:25.455205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73130 ] 00:07:38.088 [2024-11-21 03:16:25.607927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:38.088 [2024-11-21 03:16:25.647894] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:38.088 [2024-11-21 03:16:25.647963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.346 [2024-11-21 03:16:25.684227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.346 [2024-11-21 03:16:25.684268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.346 [2024-11-21 03:16:25.684390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73148 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73148 /var/tmp/spdk2.sock 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73148 ']' 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.911 03:16:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.911 [2024-11-21 03:16:26.386030] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:38.911 [2024-11-21 03:16:26.386302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73148 ] 00:07:39.194 [2024-11-21 03:16:26.533637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.194 [2024-11-21 03:16:26.568409] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:39.194 [2024-11-21 03:16:26.568460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.194 [2024-11-21 03:16:26.663606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.194 [2024-11-21 03:16:26.663708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.194 [2024-11-21 03:16:26.663651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.141 03:16:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.141 [2024-11-21 03:16:27.395255] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73130 has claimed it. 00:07:40.141 request: 00:07:40.141 { 00:07:40.141 "method": "framework_enable_cpumask_locks", 00:07:40.141 "req_id": 1 00:07:40.141 } 00:07:40.141 Got JSON-RPC error response 00:07:40.141 response: 00:07:40.141 { 00:07:40.141 "code": -32603, 00:07:40.141 "message": "Failed to claim CPU core: 2" 00:07:40.141 } 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73130 /var/tmp/spdk.sock 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 73130 ']' 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73148 /var/tmp/spdk2.sock 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73148 ']' 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:40.141 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.142 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.400 00:07:40.400 real 0m2.543s 00:07:40.400 user 0m1.165s 00:07:40.400 sys 0m0.205s 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.400 03:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.400 ************************************ 00:07:40.400 END TEST locking_overlapped_coremask_via_rpc 00:07:40.400 ************************************ 00:07:40.400 03:16:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.400 03:16:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73130 ]] 00:07:40.400 03:16:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 73130 00:07:40.400 03:16:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73130 ']' 00:07:40.400 03:16:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73130 00:07:40.400 03:16:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.400 03:16:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.400 03:16:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73130 00:07:40.400 03:16:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.400 killing process with pid 73130 00:07:40.401 03:16:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.401 03:16:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73130' 00:07:40.401 03:16:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73130 00:07:40.401 03:16:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73130 00:07:41.337 03:16:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73148 ]] 00:07:41.337 03:16:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73148 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73148 ']' 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73148 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73148 00:07:41.337 killing process with pid 73148 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 73148' 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73148 00:07:41.337 03:16:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73148 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.902 Process with pid 73130 is not found 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73130 ]] 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73130 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73130 ']' 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73130 00:07:41.902 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73130) - No such process 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73130 is not found' 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73148 ]] 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73148 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73148 ']' 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73148 00:07:41.902 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73148) - No such process 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73148 is not found' 00:07:41.902 Process with pid 73148 is not found 00:07:41.902 03:16:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.902 00:07:41.902 real 0m21.042s 00:07:41.902 user 0m35.951s 00:07:41.902 sys 0m6.941s 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.902 03:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.902 
************************************ 00:07:41.902 END TEST cpu_locks 00:07:41.902 ************************************ 00:07:41.902 00:07:41.902 real 0m50.311s 00:07:41.902 user 1m36.731s 00:07:41.902 sys 0m11.410s 00:07:41.902 03:16:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.902 03:16:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.902 ************************************ 00:07:41.902 END TEST event 00:07:41.902 ************************************ 00:07:41.902 03:16:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.902 03:16:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.902 03:16:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.902 03:16:29 -- common/autotest_common.sh@10 -- # set +x 00:07:41.902 ************************************ 00:07:41.902 START TEST thread 00:07:41.902 ************************************ 00:07:41.902 03:16:29 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:42.160 * Looking for test storage... 
00:07:42.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.160 03:16:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.160 03:16:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.160 03:16:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.160 03:16:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.160 03:16:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.160 03:16:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.160 03:16:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.160 03:16:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.160 03:16:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.160 03:16:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.160 03:16:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.160 03:16:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:42.160 03:16:29 thread -- scripts/common.sh@345 -- # : 1 00:07:42.160 03:16:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.160 03:16:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.160 03:16:29 thread -- scripts/common.sh@365 -- # decimal 1 00:07:42.160 03:16:29 thread -- scripts/common.sh@353 -- # local d=1 00:07:42.160 03:16:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.160 03:16:29 thread -- scripts/common.sh@355 -- # echo 1 00:07:42.160 03:16:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.160 03:16:29 thread -- scripts/common.sh@366 -- # decimal 2 00:07:42.160 03:16:29 thread -- scripts/common.sh@353 -- # local d=2 00:07:42.160 03:16:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.160 03:16:29 thread -- scripts/common.sh@355 -- # echo 2 00:07:42.160 03:16:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.160 03:16:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.160 03:16:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.160 03:16:29 thread -- scripts/common.sh@368 -- # return 0 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.160 --rc genhtml_branch_coverage=1 00:07:42.160 --rc genhtml_function_coverage=1 00:07:42.160 --rc genhtml_legend=1 00:07:42.160 --rc geninfo_all_blocks=1 00:07:42.160 --rc geninfo_unexecuted_blocks=1 00:07:42.160 00:07:42.160 ' 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.160 --rc genhtml_branch_coverage=1 00:07:42.160 --rc genhtml_function_coverage=1 00:07:42.160 --rc genhtml_legend=1 00:07:42.160 --rc geninfo_all_blocks=1 00:07:42.160 --rc geninfo_unexecuted_blocks=1 00:07:42.160 00:07:42.160 ' 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.160 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.160 --rc genhtml_branch_coverage=1 00:07:42.160 --rc genhtml_function_coverage=1 00:07:42.160 --rc genhtml_legend=1 00:07:42.160 --rc geninfo_all_blocks=1 00:07:42.160 --rc geninfo_unexecuted_blocks=1 00:07:42.160 00:07:42.160 ' 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.160 --rc genhtml_branch_coverage=1 00:07:42.160 --rc genhtml_function_coverage=1 00:07:42.160 --rc genhtml_legend=1 00:07:42.160 --rc geninfo_all_blocks=1 00:07:42.160 --rc geninfo_unexecuted_blocks=1 00:07:42.160 00:07:42.160 ' 00:07:42.160 03:16:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.160 03:16:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.160 ************************************ 00:07:42.160 START TEST thread_poller_perf 00:07:42.160 ************************************ 00:07:42.161 03:16:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:42.161 [2024-11-21 03:16:29.701425] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:42.161 [2024-11-21 03:16:29.701666] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73297 ] 00:07:42.419 [2024-11-21 03:16:29.840690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:42.420 [2024-11-21 03:16:29.876289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.420 [2024-11-21 03:16:29.920443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.420 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:43.794 [2024-11-21T03:16:31.360Z] ====================================== 00:07:43.794 [2024-11-21T03:16:31.360Z] busy:2304611498 (cyc) 00:07:43.794 [2024-11-21T03:16:31.360Z] total_run_count: 311000 00:07:43.794 [2024-11-21T03:16:31.360Z] tsc_hz: 2294600000 (cyc) 00:07:43.794 [2024-11-21T03:16:31.360Z] ====================================== 00:07:43.794 [2024-11-21T03:16:31.360Z] poller_cost: 7410 (cyc), 3229 (nsec) 00:07:43.794 ************************************ 00:07:43.794 END TEST thread_poller_perf 00:07:43.794 ************************************ 00:07:43.794 00:07:43.794 real 0m1.340s 00:07:43.794 user 0m1.124s 00:07:43.794 sys 0m0.109s 00:07:43.794 03:16:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.794 03:16:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.794 03:16:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.794 03:16:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:43.794 03:16:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.794 03:16:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.794 ************************************ 00:07:43.794 START TEST thread_poller_perf 00:07:43.794 ************************************ 00:07:43.794 03:16:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.794 [2024-11-21 03:16:31.107791] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 
initialization... 00:07:43.794 [2024-11-21 03:16:31.108054] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73328 ] 00:07:43.794 [2024-11-21 03:16:31.246968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:43.794 [2024-11-21 03:16:31.285260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.794 [2024-11-21 03:16:31.318827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.794 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:45.170 [2024-11-21T03:16:32.736Z] ====================================== 00:07:45.170 [2024-11-21T03:16:32.736Z] busy:2299086844 (cyc) 00:07:45.170 [2024-11-21T03:16:32.736Z] total_run_count: 4001000 00:07:45.170 [2024-11-21T03:16:32.736Z] tsc_hz: 2294600000 (cyc) 00:07:45.170 [2024-11-21T03:16:32.736Z] ====================================== 00:07:45.170 [2024-11-21T03:16:32.736Z] poller_cost: 574 (cyc), 250 (nsec) 00:07:45.170 00:07:45.170 real 0m1.340s 00:07:45.170 user 0m1.124s 00:07:45.170 sys 0m0.106s 00:07:45.170 03:16:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.171 03:16:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:45.171 ************************************ 00:07:45.171 END TEST thread_poller_perf 00:07:45.171 ************************************ 00:07:45.171 03:16:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:45.171 00:07:45.171 real 0m3.034s 00:07:45.171 user 0m2.398s 00:07:45.171 sys 0m0.436s 00:07:45.171 03:16:32 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.171 03:16:32 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.171 
************************************ 00:07:45.171 END TEST thread 00:07:45.171 ************************************ 00:07:45.171 03:16:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:45.171 03:16:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:45.171 03:16:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.171 03:16:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.171 03:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:45.171 ************************************ 00:07:45.171 START TEST app_cmdline 00:07:45.171 ************************************ 00:07:45.171 03:16:32 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:45.171 * Looking for test storage... 00:07:45.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:45.171 03:16:32 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.171 03:16:32 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.171 03:16:32 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.171 03:16:32 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.171 03:16:32 app_cmdline -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.171 03:16:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.430 03:16:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.430 --rc genhtml_branch_coverage=1 00:07:45.430 --rc genhtml_function_coverage=1 00:07:45.430 --rc genhtml_legend=1 00:07:45.430 --rc geninfo_all_blocks=1 00:07:45.430 --rc geninfo_unexecuted_blocks=1 00:07:45.430 
00:07:45.430 ' 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.430 --rc genhtml_branch_coverage=1 00:07:45.430 --rc genhtml_function_coverage=1 00:07:45.430 --rc genhtml_legend=1 00:07:45.430 --rc geninfo_all_blocks=1 00:07:45.430 --rc geninfo_unexecuted_blocks=1 00:07:45.430 00:07:45.430 ' 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.430 --rc genhtml_branch_coverage=1 00:07:45.430 --rc genhtml_function_coverage=1 00:07:45.430 --rc genhtml_legend=1 00:07:45.430 --rc geninfo_all_blocks=1 00:07:45.430 --rc geninfo_unexecuted_blocks=1 00:07:45.430 00:07:45.430 ' 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.430 --rc genhtml_branch_coverage=1 00:07:45.430 --rc genhtml_function_coverage=1 00:07:45.430 --rc genhtml_legend=1 00:07:45.430 --rc geninfo_all_blocks=1 00:07:45.430 --rc geninfo_unexecuted_blocks=1 00:07:45.430 00:07:45.430 ' 00:07:45.430 03:16:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:45.430 03:16:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73416 00:07:45.430 03:16:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:45.430 03:16:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73416 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73416 ']' 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.430 03:16:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.430 [2024-11-21 03:16:32.861562] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:45.430 [2024-11-21 03:16:32.861816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73416 ] 00:07:45.689 [2024-11-21 03:16:33.005740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.689 [2024-11-21 03:16:33.047689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.689 [2024-11-21 03:16:33.081236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.256 03:16:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.256 03:16:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:46.256 03:16:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:46.518 { 00:07:46.518 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:07:46.518 "fields": { 00:07:46.518 "major": 25, 00:07:46.518 "minor": 1, 00:07:46.518 "patch": 0, 00:07:46.518 "suffix": "-pre", 00:07:46.518 "commit": "557f022f6" 00:07:46.518 } 00:07:46.518 } 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:46.518 03:16:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.518 03:16:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.519 03:16:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.519 03:16:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.519 03:16:33 app_cmdline -- common/autotest_common.sh@646 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.519 03:16:33 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:46.519 03:16:33 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.777 request: 00:07:46.777 { 00:07:46.777 "method": "env_dpdk_get_mem_stats", 00:07:46.777 "req_id": 1 00:07:46.777 } 00:07:46.777 Got JSON-RPC error response 00:07:46.777 response: 00:07:46.777 { 00:07:46.777 "code": -32601, 00:07:46.777 "message": "Method not found" 00:07:46.777 } 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.778 03:16:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73416 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73416 ']' 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73416 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73416 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73416' 00:07:46.778 killing process with pid 73416 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@973 -- # kill 73416 00:07:46.778 03:16:34 app_cmdline -- common/autotest_common.sh@978 -- # wait 73416 00:07:47.351 
00:07:47.351 real 0m2.132s 00:07:47.351 user 0m2.382s 00:07:47.351 sys 0m0.642s 00:07:47.351 03:16:34 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.351 03:16:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.351 ************************************ 00:07:47.351 END TEST app_cmdline 00:07:47.351 ************************************ 00:07:47.351 03:16:34 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:47.351 03:16:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.351 03:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.351 03:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:47.351 ************************************ 00:07:47.351 START TEST version 00:07:47.351 ************************************ 00:07:47.351 03:16:34 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:47.351 * Looking for test storage... 
00:07:47.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:47.351 03:16:34 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.351 03:16:34 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.351 03:16:34 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.620 03:16:34 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.620 03:16:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.620 03:16:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.620 03:16:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.620 03:16:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.620 03:16:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.620 03:16:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.620 03:16:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.620 03:16:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.620 03:16:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.620 03:16:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.620 03:16:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.620 03:16:34 version -- scripts/common.sh@344 -- # case "$op" in 00:07:47.620 03:16:34 version -- scripts/common.sh@345 -- # : 1 00:07:47.620 03:16:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.620 03:16:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.620 03:16:34 version -- scripts/common.sh@365 -- # decimal 1 00:07:47.620 03:16:34 version -- scripts/common.sh@353 -- # local d=1 00:07:47.620 03:16:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.620 03:16:34 version -- scripts/common.sh@355 -- # echo 1 00:07:47.620 03:16:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.620 03:16:34 version -- scripts/common.sh@366 -- # decimal 2 00:07:47.621 03:16:34 version -- scripts/common.sh@353 -- # local d=2 00:07:47.621 03:16:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.621 03:16:34 version -- scripts/common.sh@355 -- # echo 2 00:07:47.621 03:16:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.621 03:16:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.621 03:16:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.621 03:16:34 version -- scripts/common.sh@368 -- # return 0 00:07:47.621 03:16:34 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.621 03:16:34 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.621 --rc genhtml_branch_coverage=1 00:07:47.621 --rc genhtml_function_coverage=1 00:07:47.621 --rc genhtml_legend=1 00:07:47.621 --rc geninfo_all_blocks=1 00:07:47.621 --rc geninfo_unexecuted_blocks=1 00:07:47.621 00:07:47.621 ' 00:07:47.621 03:16:34 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.621 --rc genhtml_branch_coverage=1 00:07:47.621 --rc genhtml_function_coverage=1 00:07:47.621 --rc genhtml_legend=1 00:07:47.621 --rc geninfo_all_blocks=1 00:07:47.621 --rc geninfo_unexecuted_blocks=1 00:07:47.621 00:07:47.621 ' 00:07:47.621 03:16:34 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.621 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.621 --rc genhtml_branch_coverage=1 00:07:47.621 --rc genhtml_function_coverage=1 00:07:47.621 --rc genhtml_legend=1 00:07:47.621 --rc geninfo_all_blocks=1 00:07:47.621 --rc geninfo_unexecuted_blocks=1 00:07:47.621 00:07:47.621 ' 00:07:47.621 03:16:34 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.621 --rc genhtml_branch_coverage=1 00:07:47.621 --rc genhtml_function_coverage=1 00:07:47.621 --rc genhtml_legend=1 00:07:47.621 --rc geninfo_all_blocks=1 00:07:47.621 --rc geninfo_unexecuted_blocks=1 00:07:47.621 00:07:47.621 ' 00:07:47.621 03:16:34 version -- app/version.sh@17 -- # get_header_version major 00:07:47.621 03:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # cut -f2 00:07:47.621 03:16:34 version -- app/version.sh@17 -- # major=25 00:07:47.621 03:16:34 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.621 03:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # cut -f2 00:07:47.621 03:16:34 version -- app/version.sh@18 -- # minor=1 00:07:47.621 03:16:34 version -- app/version.sh@19 -- # get_header_version patch 00:07:47.621 03:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # cut -f2 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.621 03:16:34 version -- app/version.sh@19 -- # patch=0 00:07:47.621 
03:16:34 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # cut -f2 00:07:47.621 03:16:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.621 03:16:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.621 03:16:34 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.621 03:16:34 version -- app/version.sh@22 -- # version=25.1 00:07:47.621 03:16:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.621 03:16:34 version -- app/version.sh@28 -- # version=25.1rc0 00:07:47.621 03:16:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:47.621 03:16:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:47.621 03:16:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:47.621 03:16:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:47.621 ************************************ 00:07:47.621 END TEST version 00:07:47.621 ************************************ 00:07:47.621 00:07:47.621 real 0m0.331s 00:07:47.621 user 0m0.208s 00:07:47.621 sys 0m0.167s 00:07:47.621 03:16:35 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.621 03:16:35 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.621 03:16:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:47.621 03:16:35 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:47.621 03:16:35 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:47.621 03:16:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.621 03:16:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.621 03:16:35 -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.621 ************************************ 00:07:47.621 START TEST bdev_raid 00:07:47.621 ************************************ 00:07:47.621 03:16:35 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:47.879 * Looking for test storage... 00:07:47.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.879 03:16:35 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.879 --rc genhtml_branch_coverage=1 00:07:47.879 --rc genhtml_function_coverage=1 00:07:47.879 --rc genhtml_legend=1 00:07:47.879 --rc geninfo_all_blocks=1 00:07:47.879 --rc geninfo_unexecuted_blocks=1 00:07:47.879 00:07:47.879 ' 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.879 --rc genhtml_branch_coverage=1 00:07:47.879 --rc genhtml_function_coverage=1 00:07:47.879 --rc genhtml_legend=1 00:07:47.879 --rc geninfo_all_blocks=1 00:07:47.879 --rc geninfo_unexecuted_blocks=1 00:07:47.879 00:07:47.879 ' 00:07:47.879 03:16:35 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:47.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.879 --rc genhtml_branch_coverage=1 00:07:47.879 --rc genhtml_function_coverage=1 00:07:47.879 --rc genhtml_legend=1 00:07:47.879 --rc geninfo_all_blocks=1 00:07:47.879 --rc geninfo_unexecuted_blocks=1 00:07:47.879 00:07:47.880 ' 00:07:47.880 03:16:35 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.880 --rc genhtml_branch_coverage=1 00:07:47.880 --rc genhtml_function_coverage=1 00:07:47.880 --rc genhtml_legend=1 00:07:47.880 --rc geninfo_all_blocks=1 00:07:47.880 --rc geninfo_unexecuted_blocks=1 00:07:47.880 00:07:47.880 ' 00:07:47.880 03:16:35 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:47.880 03:16:35 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:47.880 03:16:35 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:47.880 03:16:35 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:47.880 03:16:35 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:47.880 03:16:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:47.880 03:16:35 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:47.880 03:16:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.880 03:16:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.880 03:16:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.880 ************************************ 00:07:47.880 START TEST raid1_resize_data_offset_test 00:07:47.880 ************************************ 00:07:47.880 Process raid pid: 73577 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test 
-- bdev/bdev_raid.sh@917 -- # raid_pid=73577 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73577' 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73577 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 73577 ']' 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.880 03:16:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.138 [2024-11-21 03:16:35.452890] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:48.138 [2024-11-21 03:16:35.453196] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.138 [2024-11-21 03:16:35.596703] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:48.138 [2024-11-21 03:16:35.636079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.138 [2024-11-21 03:16:35.668086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.396 [2024-11-21 03:16:35.713128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.396 [2024-11-21 03:16:35.713261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.962 malloc0 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.962 malloc1 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:48.962 null0 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.962 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.962 [2024-11-21 03:16:36.368972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:48.962 [2024-11-21 03:16:36.371136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:48.962 [2024-11-21 03:16:36.371198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:48.962 [2024-11-21 03:16:36.371376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:48.962 [2024-11-21 03:16:36.371404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:48.962 [2024-11-21 03:16:36.371729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:48.962 [2024-11-21 03:16:36.371884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:48.963 [2024-11-21 03:16:36.371895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:48.963 [2024-11-21 03:16:36.372112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.963 [2024-11-21 03:16:36.429033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.963 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.221 malloc2 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.221 [2024-11-21 03:16:36.560565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc2 is claimed 00:07:49.221 [2024-11-21 03:16:36.566240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.221 [2024-11-21 03:16:36.568477] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73577 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 73577 ']' 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 73577 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73577 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.221 killing process with pid 73577 00:07:49.221 03:16:36 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73577' 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 73577 00:07:49.221 03:16:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 73577 00:07:49.221 [2024-11-21 03:16:36.658794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.221 [2024-11-21 03:16:36.659586] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:49.221 [2024-11-21 03:16:36.659659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.221 [2024-11-21 03:16:36.659680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:49.221 [2024-11-21 03:16:36.666871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.221 [2024-11-21 03:16:36.667341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.221 [2024-11-21 03:16:36.667364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:49.479 [2024-11-21 03:16:36.889378] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.736 03:16:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:49.736 00:07:49.736 real 0m1.762s 00:07:49.736 user 0m1.711s 00:07:49.736 sys 0m0.510s 00:07:49.736 03:16:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.736 03:16:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.736 ************************************ 00:07:49.736 END TEST raid1_resize_data_offset_test 00:07:49.736 
************************************ 00:07:49.736 03:16:37 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:49.736 03:16:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.736 03:16:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.736 03:16:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.736 ************************************ 00:07:49.736 START TEST raid0_resize_superblock_test 00:07:49.736 ************************************ 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:49.736 Process raid pid: 73633 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73633 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73633' 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73633 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73633 ']' 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:49.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.736 03:16:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.736 [2024-11-21 03:16:37.283488] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:49.736 [2024-11-21 03:16:37.283761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.994 [2024-11-21 03:16:37.425963] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.994 [2024-11-21 03:16:37.465826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.994 [2024-11-21 03:16:37.499384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.994 [2024-11-21 03:16:37.543858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.994 [2024-11-21 03:16:37.544007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 malloc0 00:07:50.925 03:16:38 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 [2024-11-21 03:16:38.274356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:50.925 [2024-11-21 03:16:38.274464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.925 [2024-11-21 03:16:38.274500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:50.925 [2024-11-21 03:16:38.274524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.925 [2024-11-21 03:16:38.277343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.925 [2024-11-21 03:16:38.277402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:50.925 pt0 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 9970b2b7-8d09-435c-9db9-46620f347cd3 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:50.925 03:16:38 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 b1b910b8-e9da-4f08-a981-b848ee8e01ff 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 e7d88b10-a704-4d78-9f52-088845f80af4 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 [2024-11-21 03:16:38.417859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b1b910b8-e9da-4f08-a981-b848ee8e01ff is claimed 00:07:50.925 [2024-11-21 03:16:38.418142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e7d88b10-a704-4d78-9f52-088845f80af4 is claimed 00:07:50.925 [2024-11-21 03:16:38.418305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:50.925 [2024-11-21 03:16:38.418319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:50.925 [2024-11-21 03:16:38.418693] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:50.925 [2024-11-21 03:16:38.418896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:50.925 [2024-11-21 03:16:38.418912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:50.925 [2024-11-21 03:16:38.419107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.925 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:51.183 [2024-11-21 03:16:38.534201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.183 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 [2024-11-21 03:16:38.562169] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:51.184 [2024-11-21 03:16:38.562225] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b1b910b8-e9da-4f08-a981-b848ee8e01ff' was resized: old size 131072, new size 204800 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 [2024-11-21 03:16:38.574102] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:51.184 [2024-11-21 03:16:38.574168] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e7d88b10-a704-4d78-9f52-088845f80af4' was resized: old size 131072, new size 204800 00:07:51.184 [2024-11-21 03:16:38.574207] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 [2024-11-21 03:16:38.682207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 [2024-11-21 03:16:38.713980] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:51.184 [2024-11-21 03:16:38.714112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:51.184 [2024-11-21 03:16:38.714124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.184 [2024-11-21 03:16:38.714141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:51.184 [2024-11-21 03:16:38.714287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.184 [2024-11-21 03:16:38.714343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.184 [2024-11-21 03:16:38.714355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 [2024-11-21 03:16:38.725897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:51.184 [2024-11-21 03:16:38.726109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.184 [2024-11-21 03:16:38.726150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:51.184 [2024-11-21 03:16:38.726162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.184 [2024-11-21 03:16:38.728799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.184 [2024-11-21 03:16:38.728855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:51.184 pt0 00:07:51.184 [2024-11-21 03:16:38.730780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b1b910b8-e9da-4f08-a981-b848ee8e01ff 00:07:51.184 [2024-11-21 03:16:38.730837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b1b910b8-e9da-4f08-a981-b848ee8e01ff is claimed 00:07:51.184 [2024-11-21 03:16:38.730953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e7d88b10-a704-4d78-9f52-088845f80af4 00:07:51.184 [2024-11-21 03:16:38.730981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e7d88b10-a704-4d78-9f52-088845f80af4 is claimed 00:07:51.184 [2024-11-21 03:16:38.731116] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e7d88b10-a704-4d78-9f52-088845f80af4 (2) smaller than existing raid bdev Raid (3) 00:07:51.184 [2024-11-21 03:16:38.731145] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b1b910b8-e9da-4f08-a981-b848ee8e01ff: File exists 00:07:51.184 [2024-11-21 03:16:38.731194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:51.184 [2024-11-21 03:16:38.731203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:51.184 [2024-11-21 03:16:38.731484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:51.184 [2024-11-21 03:16:38.731627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:51.184 [2024-11-21 03:16:38.731642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:51.184 [2024-11-21 03:16:38.731780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 [2024-11-21 03:16:38.754294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73633 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73633 ']' 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73633 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73633 00:07:51.442 killing process with pid 73633 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73633' 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73633 00:07:51.442 [2024-11-21 03:16:38.840327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.442 03:16:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73633 00:07:51.442 [2024-11-21 03:16:38.840459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.442 [2024-11-21 03:16:38.840518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.442 [2024-11-21 03:16:38.840534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:51.442 [2024-11-21 03:16:39.005099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.700 ************************************ 00:07:51.700 END TEST raid0_resize_superblock_test 00:07:51.700 ************************************ 00:07:51.700 03:16:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:51.700 00:07:51.700 real 0m2.046s 00:07:51.700 user 0m2.316s 00:07:51.700 sys 0m0.540s 00:07:51.700 03:16:39 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.700 03:16:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.957 03:16:39 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:51.957 03:16:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.957 03:16:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.957 03:16:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.957 ************************************ 00:07:51.957 START TEST raid1_resize_superblock_test 00:07:51.957 ************************************ 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:51.957 Process raid pid: 73704 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73704 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73704' 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73704 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73704 ']' 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:51.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.957 03:16:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.957 [2024-11-21 03:16:39.403071] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:51.957 [2024-11-21 03:16:39.403309] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.215 [2024-11-21 03:16:39.546311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.215 [2024-11-21 03:16:39.582115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.215 [2024-11-21 03:16:39.613962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.215 [2024-11-21 03:16:39.658968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.215 [2024-11-21 03:16:39.659034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.779 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.779 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.779 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:52.779 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.779 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 malloc0 
00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 [2024-11-21 03:16:40.416262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:53.037 [2024-11-21 03:16:40.416363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.037 [2024-11-21 03:16:40.416399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.037 [2024-11-21 03:16:40.416414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.037 [2024-11-21 03:16:40.419211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.037 [2024-11-21 03:16:40.419279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:53.037 pt0 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 7d48bea1-fb3d-445f-80b7-4e10b1cb5279 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:53.037 03:16:40 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 ea52b43f-fd90-4bcd-af8c-bcacb432f55d 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 389bda09-b2f7-488d-aeee-5373cc0cfd02 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 [2024-11-21 03:16:40.559994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ea52b43f-fd90-4bcd-af8c-bcacb432f55d is claimed 00:07:53.037 [2024-11-21 03:16:40.560204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 389bda09-b2f7-488d-aeee-5373cc0cfd02 is claimed 00:07:53.037 [2024-11-21 03:16:40.560383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:53.037 [2024-11-21 03:16:40.560419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:53.037 [2024-11-21 03:16:40.560788] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:53.037 [2024-11-21 03:16:40.560977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:53.037 [2024-11-21 03:16:40.560996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:53.037 [2024-11-21 03:16:40.561226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:53.038 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:53.296 [2024-11-21 03:16:40.668312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 [2024-11-21 03:16:40.720284] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:53.296 [2024-11-21 03:16:40.720341] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ea52b43f-fd90-4bcd-af8c-bcacb432f55d' was resized: old size 131072, new size 204800 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 [2024-11-21 03:16:40.732206] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:53.296 [2024-11-21 03:16:40.732260] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '389bda09-b2f7-488d-aeee-5373cc0cfd02' was resized: old size 131072, new size 204800 00:07:53.296 [2024-11-21 03:16:40.732297] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.296 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.296 [2024-11-21 03:16:40.844315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.555 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.555 [2024-11-21 03:16:40.888191] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:53.555 [2024-11-21 03:16:40.888314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:53.555 [2024-11-21 03:16:40.888344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:53.555 [2024-11-21 03:16:40.888568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.555 [2024-11-21 03:16:40.888753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.556 [2024-11-21 03:16:40.888819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.556 [2024-11-21 03:16:40.888832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.556 [2024-11-21 03:16:40.900079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:53.556 [2024-11-21 03:16:40.900178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.556 [2024-11-21 03:16:40.900208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:53.556 [2024-11-21 03:16:40.900219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.556 [2024-11-21 03:16:40.902803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.556 [2024-11-21 03:16:40.902866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:53.556 pt0 00:07:53.556 [2024-11-21 03:16:40.904677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ea52b43f-fd90-4bcd-af8c-bcacb432f55d 00:07:53.556 [2024-11-21 03:16:40.904742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ea52b43f-fd90-4bcd-af8c-bcacb432f55d is claimed 00:07:53.556 [2024-11-21 03:16:40.904852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 389bda09-b2f7-488d-aeee-5373cc0cfd02 00:07:53.556 [2024-11-21 03:16:40.904882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 389bda09-b2f7-488d-aeee-5373cc0cfd02 is claimed 00:07:53.556 [2024-11-21 03:16:40.905090] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 389bda09-b2f7-488d-aeee-5373cc0cfd02 (2) smaller than existing raid bdev Raid (3) 00:07:53.556 [2024-11-21 03:16:40.905113] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ea52b43f-fd90-4bcd-af8c-bcacb432f55d: File exists 00:07:53.556 [2024-11-21 03:16:40.905164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:53.556 [2024-11-21 03:16:40.905172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:53.556 [2024-11-21 03:16:40.905471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:53.556 [2024-11-21 03:16:40.905625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:53.556 [2024-11-21 03:16:40.905640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:53.556 [2024-11-21 03:16:40.905783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.556 [2024-11-21 03:16:40.929193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73704 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73704 ']' 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73704 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.556 03:16:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73704 00:07:53.556 killing process with pid 73704 00:07:53.556 03:16:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.556 03:16:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.556 03:16:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73704' 00:07:53.556 03:16:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73704 00:07:53.556 [2024-11-21 03:16:41.015684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.556 [2024-11-21 03:16:41.015816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.556 03:16:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73704 00:07:53.556 [2024-11-21 03:16:41.015880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.556 [2024-11-21 03:16:41.015895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:53.815 [2024-11-21 03:16:41.185428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.079 ************************************ 00:07:54.079 END TEST raid1_resize_superblock_test 00:07:54.079 ************************************ 00:07:54.079 03:16:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:54.079 00:07:54.079 real 0m2.114s 00:07:54.079 user 0m2.424s 00:07:54.079 sys 0m0.555s 00:07:54.079 03:16:41 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.079 03:16:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.079 03:16:41 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:54.079 03:16:41 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:54.079 03:16:41 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:54.079 03:16:41 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:54.079 03:16:41 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:54.079 03:16:41 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:54.079 03:16:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.079 03:16:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.079 03:16:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.079 ************************************ 00:07:54.079 START TEST raid_function_test_raid0 00:07:54.079 ************************************ 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73779 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.079 Process raid pid: 73779 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73779' 00:07:54.079 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73779 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 73779 ']' 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.079 03:16:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:54.348 [2024-11-21 03:16:41.641237] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:54.348 [2024-11-21 03:16:41.641447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.348 [2024-11-21 03:16:41.782813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:54.348 [2024-11-21 03:16:41.809938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.348 [2024-11-21 03:16:41.844705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.348 [2024-11-21 03:16:41.894586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.348 [2024-11-21 03:16:41.894641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.918 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.918 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:54.918 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:54.918 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.918 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 Base_1 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 Base_2 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.178 [2024-11-21 
03:16:42.520155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:55.178 [2024-11-21 03:16:42.522418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:55.178 [2024-11-21 03:16:42.522510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:55.178 [2024-11-21 03:16:42.522532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:55.178 [2024-11-21 03:16:42.522899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:55.178 [2024-11-21 03:16:42.523073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:55.178 [2024-11-21 03:16:42.523089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:55.178 [2024-11-21 03:16:42.523285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:55.178 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 
00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:55.179 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:55.438 [2024-11-21 03:16:42.772217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:55.438 /dev/nbd0 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@877 -- # break 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.438 1+0 records in 00:07:55.438 1+0 records out 00:07:55.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432698 s, 9.5 MB/s 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.438 03:16:42 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:55.697 { 00:07:55.697 "nbd_device": "/dev/nbd0", 00:07:55.697 "bdev_name": 
"raid" 00:07:55.697 } 00:07:55.697 ]' 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:55.697 { 00:07:55.697 "nbd_device": "/dev/nbd0", 00:07:55.697 "bdev_name": "raid" 00:07:55.697 } 00:07:55.697 ]' 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # 
local rw_blk_num=4096 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:55.697 4096+0 records in 00:07:55.697 4096+0 records out 00:07:55.697 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341253 s, 61.5 MB/s 00:07:55.697 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:55.957 4096+0 records in 00:07:55.957 4096+0 records out 00:07:55.957 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237262 s, 8.8 MB/s 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:55.957 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:55.957 128+0 records in 00:07:55.958 128+0 records out 00:07:55.958 65536 bytes (66 kB, 64 KiB) copied, 0.000506229 s, 129 MB/s 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:55.958 2035+0 records in 00:07:55.958 2035+0 records out 00:07:55.958 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00913851 s, 114 MB/s 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 
)) 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:55.958 456+0 records in 00:07:55.958 456+0 records out 00:07:55.958 233472 bytes (233 kB, 228 KiB) copied, 0.0030591 s, 76.3 MB/s 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.958 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.218 [2024-11-21 03:16:43.720433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:56.218 03:16:43 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:56.478 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.478 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.478 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep 
-c /dev/nbd 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73779 00:07:56.738 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 73779 ']' 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 73779 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73779 00:07:56.739 killing process with pid 73779 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73779' 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 73779 00:07:56.739 [2024-11-21 03:16:44.138514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.739 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 73779 00:07:56.739 [2024-11-21 03:16:44.138644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:07:56.739 [2024-11-21 03:16:44.138718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.739 [2024-11-21 03:16:44.138736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:56.739 [2024-11-21 03:16:44.163461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.999 03:16:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:56.999 00:07:56.999 real 0m2.874s 00:07:56.999 user 0m3.561s 00:07:56.999 sys 0m0.997s 00:07:56.999 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.999 03:16:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:56.999 ************************************ 00:07:56.999 END TEST raid_function_test_raid0 00:07:56.999 ************************************ 00:07:57.000 03:16:44 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:57.000 03:16:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.000 03:16:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.000 03:16:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.000 ************************************ 00:07:57.000 START TEST raid_function_test_concat 00:07:57.000 ************************************ 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@69 -- # raid_pid=73896 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73896' 00:07:57.000 Process raid pid: 73896 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73896 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 73896 ']' 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.000 03:16:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:57.000 [2024-11-21 03:16:44.553730] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:07:57.000 [2024-11-21 03:16:44.553887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.260 [2024-11-21 03:16:44.694846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.260 [2024-11-21 03:16:44.730493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.260 [2024-11-21 03:16:44.762039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.260 [2024-11-21 03:16:44.807516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.260 [2024-11-21 03:16:44.807560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.831 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.831 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:57.831 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:57.831 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.831 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 Base_1 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 Base_2 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 
[2024-11-21 03:16:45.431245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:58.091 [2024-11-21 03:16:45.433334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:58.091 [2024-11-21 03:16:45.433463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:58.091 [2024-11-21 03:16:45.433478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:58.091 [2024-11-21 03:16:45.433799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:58.091 [2024-11-21 03:16:45.433981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:58.091 [2024-11-21 03:16:45.434012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:58.091 [2024-11-21 03:16:45.434211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks 
/var/tmp/spdk.sock raid /dev/nbd0 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:58.091 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:58.351 [2024-11-21 03:16:45.711371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:58.351 /dev/nbd0 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.351 1+0 records in 00:07:58.351 1+0 records out 00:07:58.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420416 s, 9.7 MB/s 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:58.351 03:16:45 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 
00:07:58.610 { 00:07:58.610 "nbd_device": "/dev/nbd0", 00:07:58.610 "bdev_name": "raid" 00:07:58.610 } 00:07:58.610 ]' 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.610 { 00:07:58.610 "nbd_device": "/dev/nbd0", 00:07:58.610 "bdev_name": "raid" 00:07:58.610 } 00:07:58.610 ]' 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # 
blksize=512 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:58.610 4096+0 records in 00:07:58.610 4096+0 records out 00:07:58.610 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0346131 s, 60.6 MB/s 00:07:58.610 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:58.870 4096+0 records in 00:07:58.870 4096+0 records out 00:07:58.870 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.212878 s, 9.9 MB/s 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:58.870 128+0 records in 00:07:58.870 128+0 records out 00:07:58.870 65536 bytes (66 kB, 64 KiB) copied, 0.0011614 s, 56.4 MB/s 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:58.870 2035+0 records in 00:07:58.870 2035+0 records out 00:07:58.870 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0143616 s, 72.5 MB/s 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 
-- # (( i++ )) 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:58.870 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:59.130 456+0 records in 00:07:59.130 456+0 records out 00:07:59.130 233472 bytes (233 kB, 228 KiB) copied, 0.00403406 s, 57.9 MB/s 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:59.130 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:59.131 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:59.131 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.131 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:59.131 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.131 
03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.391 [2024-11-21 03:16:46.730868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:59.391 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:59.651 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:59.651 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:59.651 03:16:46 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:59.651 03:16:47 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73896 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 73896 ']' 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 73896 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73896 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.651 killing process with pid 73896 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73896' 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 73896 00:07:59.651 [2024-11-21 03:16:47.093404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.651 [2024-11-21 
03:16:47.093551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.651 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 73896 00:07:59.651 [2024-11-21 03:16:47.093660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.651 [2024-11-21 03:16:47.093675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:59.651 [2024-11-21 03:16:47.117545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.911 03:16:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:59.911 00:07:59.911 real 0m2.882s 00:07:59.911 user 0m3.588s 00:07:59.911 sys 0m1.014s 00:07:59.911 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.911 03:16:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:59.911 ************************************ 00:07:59.911 END TEST raid_function_test_concat 00:07:59.911 ************************************ 00:07:59.911 03:16:47 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:59.911 03:16:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.911 03:16:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.911 03:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.911 ************************************ 00:07:59.911 START TEST raid0_resize_test 00:07:59.911 ************************************ 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test 
-- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=74009 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.911 Process raid pid: 74009 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 74009' 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 74009 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 74009 ']' 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.911 03:16:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.172 [2024-11-21 03:16:47.511692] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:08:00.172 [2024-11-21 03:16:47.511852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.172 [2024-11-21 03:16:47.658163] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.172 [2024-11-21 03:16:47.694983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.172 [2024-11-21 03:16:47.725619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.432 [2024-11-21 03:16:47.769060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.432 [2024-11-21 03:16:47.769102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 Base_1 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 Base_2 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 [2024-11-21 03:16:48.393082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:01.002 [2024-11-21 03:16:48.395237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:01.002 [2024-11-21 03:16:48.395322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:01.002 [2024-11-21 03:16:48.395331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:01.002 [2024-11-21 03:16:48.395666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:01.002 [2024-11-21 03:16:48.395788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:01.002 [2024-11-21 03:16:48.395808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:08:01.002 [2024-11-21 03:16:48.395965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 [2024-11-21 03:16:48.405033] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:08:01.002 [2024-11-21 03:16:48.405077] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:01.002 true 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 [2024-11-21 03:16:48.421281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 [2024-11-21 03:16:48.465098] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:01.002 [2024-11-21 03:16:48.465160] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 
00:08:01.002 [2024-11-21 03:16:48.465191] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:01.002 true 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:01.002 [2024-11-21 03:16:48.477302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 74009 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 74009 ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 74009 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.002 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 74009 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.263 killing process with pid 74009 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74009' 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 74009 00:08:01.263 [2024-11-21 03:16:48.574185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:01.263 [2024-11-21 03:16:48.574354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 74009 00:08:01.263 [2024-11-21 03:16:48.574426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.263 [2024-11-21 03:16:48.574448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:08:01.263 [2024-11-21 03:16:48.576150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:01.263 00:08:01.263 real 0m1.381s 00:08:01.263 user 0m1.564s 00:08:01.263 sys 0m0.331s 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.263 03:16:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.263 ************************************ 00:08:01.263 END TEST raid0_resize_test 00:08:01.263 ************************************ 00:08:01.524 03:16:48 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:01.524 03:16:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.524 03:16:48 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.524 03:16:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.524 ************************************ 00:08:01.524 START TEST raid1_resize_test 00:08:01.524 ************************************ 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=74059 00:08:01.524 Process raid pid: 74059 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 74059' 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 74059 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 74059 ']' 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:08:01.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.524 03:16:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.524 [2024-11-21 03:16:48.947584] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:01.524 [2024-11-21 03:16:48.947719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.524 [2024-11-21 03:16:49.086165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:01.783 [2024-11-21 03:16:49.122386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.783 [2024-11-21 03:16:49.153251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.783 [2024-11-21 03:16:49.196230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.783 [2024-11-21 03:16:49.196291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 Base_1 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 Base_2 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 [2024-11-21 03:16:49.875326] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:02.351 [2024-11-21 03:16:49.877492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:02.351 [2024-11-21 03:16:49.877582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:02.351 [2024-11-21 03:16:49.877592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:02.351 [2024-11-21 03:16:49.877944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:02.351 [2024-11-21 03:16:49.878100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:02.351 [2024-11-21 03:16:49.878121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:08:02.351 [2024-11-21 03:16:49.878280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 [2024-11-21 03:16:49.887284] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:02.351 [2024-11-21 03:16:49.887328] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:02.351 true 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 
00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.351 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.351 [2024-11-21 03:16:49.903523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.610 [2024-11-21 03:16:49.931342] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:02.610 [2024-11-21 03:16:49.931393] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:02.610 [2024-11-21 03:16:49.931425] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:02.610 true 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:02.610 
03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.610 [2024-11-21 03:16:49.947504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.610 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 74059 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 74059 ']' 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 74059 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74059 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.611 killing process with pid 74059 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74059' 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # 
kill 74059 00:08:02.611 [2024-11-21 03:16:49.995568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.611 [2024-11-21 03:16:49.995714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.611 03:16:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 74059 00:08:02.611 [2024-11-21 03:16:49.996228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.611 [2024-11-21 03:16:49.996256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:08:02.611 [2024-11-21 03:16:49.997491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.870 03:16:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:02.870 00:08:02.870 real 0m1.359s 00:08:02.870 user 0m1.528s 00:08:02.870 sys 0m0.320s 00:08:02.870 03:16:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.870 03:16:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.870 ************************************ 00:08:02.870 END TEST raid1_resize_test 00:08:02.870 ************************************ 00:08:02.870 03:16:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:02.870 03:16:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.870 03:16:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:02.870 03:16:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.870 03:16:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.870 03:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.870 ************************************ 00:08:02.870 START TEST raid_state_function_test 00:08:02.870 ************************************ 00:08:02.870 03:16:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.870 03:16:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74111 00:08:02.870 Process raid pid: 74111 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74111' 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74111 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74111 ']' 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.870 03:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.871 03:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
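The xtrace above shows bdev_raid.sh deriving its create-time arguments before launching bdev_svc: a strip size is set for every level except raid1, and the superblock argument stays empty because `superblock=false`. A condensed, hedged sketch of that branching (variable names follow the trace; the concrete values and the `-s` flag are assumptions for illustration):

```shell
#!/bin/sh
# Hypothetical condensation of the argument setup traced above
# (raid_level/superblock values are assumptions for illustration).
raid_level=raid0
superblock=false

strip_size_create_arg=
superblock_create_arg=
if [ "$raid_level" != raid1 ]; then
    strip_size=64                        # KiB, as in strip_size_kb
    strip_size_create_arg="-z $strip_size"
fi
if [ "$superblock" = true ]; then
    superblock_create_arg=-s             # assumed flag; this trace leaves it empty
fi
echo "create args: $strip_size_create_arg $superblock_create_arg"
```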
00:08:02.871 03:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.871 03:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.871 [2024-11-21 03:16:50.387482] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:02.871 [2024-11-21 03:16:50.387621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.129 [2024-11-21 03:16:50.526320] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.129 [2024-11-21 03:16:50.568369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.129 [2024-11-21 03:16:50.599184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.129 [2024-11-21 03:16:50.643209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.129 [2024-11-21 03:16:50.643253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.696 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.696 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.696 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.696 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.696 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.954 [2024-11-21 03:16:51.263037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.954 
[2024-11-21 03:16:51.263097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.954 [2024-11-21 03:16:51.263112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.954 [2024-11-21 03:16:51.263122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.954 "name": "Existed_Raid", 00:08:03.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.954 "strip_size_kb": 64, 00:08:03.954 "state": "configuring", 00:08:03.954 "raid_level": "raid0", 00:08:03.954 "superblock": false, 00:08:03.954 "num_base_bdevs": 2, 00:08:03.954 "num_base_bdevs_discovered": 0, 00:08:03.954 "num_base_bdevs_operational": 2, 00:08:03.954 "base_bdevs_list": [ 00:08:03.954 { 00:08:03.954 "name": "BaseBdev1", 00:08:03.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.954 "is_configured": false, 00:08:03.954 "data_offset": 0, 00:08:03.954 "data_size": 0 00:08:03.954 }, 00:08:03.954 { 00:08:03.954 "name": "BaseBdev2", 00:08:03.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.954 "is_configured": false, 00:08:03.954 "data_offset": 0, 00:08:03.954 "data_size": 0 00:08:03.954 } 00:08:03.954 ] 00:08:03.954 }' 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.954 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.212 [2024-11-21 03:16:51.699042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.212 [2024-11-21 03:16:51.699095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, 
state configuring 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.212 [2024-11-21 03:16:51.711079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.212 [2024-11-21 03:16:51.711129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.212 [2024-11-21 03:16:51.711139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.212 [2024-11-21 03:16:51.711149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.212 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.212 [2024-11-21 03:16:51.728056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.213 BaseBdev1 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:04.213 03:16:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.213 [ 00:08:04.213 { 00:08:04.213 "name": "BaseBdev1", 00:08:04.213 "aliases": [ 00:08:04.213 "83624b04-471c-4e1d-9524-ccb36c4597ae" 00:08:04.213 ], 00:08:04.213 "product_name": "Malloc disk", 00:08:04.213 "block_size": 512, 00:08:04.213 "num_blocks": 65536, 00:08:04.213 "uuid": "83624b04-471c-4e1d-9524-ccb36c4597ae", 00:08:04.213 "assigned_rate_limits": { 00:08:04.213 "rw_ios_per_sec": 0, 00:08:04.213 "rw_mbytes_per_sec": 0, 00:08:04.213 "r_mbytes_per_sec": 0, 00:08:04.213 "w_mbytes_per_sec": 0 00:08:04.213 }, 00:08:04.213 "claimed": true, 00:08:04.213 "claim_type": "exclusive_write", 00:08:04.213 "zoned": false, 00:08:04.213 "supported_io_types": { 00:08:04.213 "read": true, 00:08:04.213 "write": true, 00:08:04.213 "unmap": true, 00:08:04.213 "flush": true, 
00:08:04.213 "reset": true, 00:08:04.213 "nvme_admin": false, 00:08:04.213 "nvme_io": false, 00:08:04.213 "nvme_io_md": false, 00:08:04.213 "write_zeroes": true, 00:08:04.213 "zcopy": true, 00:08:04.213 "get_zone_info": false, 00:08:04.213 "zone_management": false, 00:08:04.213 "zone_append": false, 00:08:04.213 "compare": false, 00:08:04.213 "compare_and_write": false, 00:08:04.213 "abort": true, 00:08:04.213 "seek_hole": false, 00:08:04.213 "seek_data": false, 00:08:04.213 "copy": true, 00:08:04.213 "nvme_iov_md": false 00:08:04.213 }, 00:08:04.213 "memory_domains": [ 00:08:04.213 { 00:08:04.213 "dma_device_id": "system", 00:08:04.213 "dma_device_type": 1 00:08:04.213 }, 00:08:04.213 { 00:08:04.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.213 "dma_device_type": 2 00:08:04.213 } 00:08:04.213 ], 00:08:04.213 "driver_specific": {} 00:08:04.213 } 00:08:04.213 ] 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.213 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.471 03:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.471 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.471 "name": "Existed_Raid", 00:08:04.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.472 "strip_size_kb": 64, 00:08:04.472 "state": "configuring", 00:08:04.472 "raid_level": "raid0", 00:08:04.472 "superblock": false, 00:08:04.472 "num_base_bdevs": 2, 00:08:04.472 "num_base_bdevs_discovered": 1, 00:08:04.472 "num_base_bdevs_operational": 2, 00:08:04.472 "base_bdevs_list": [ 00:08:04.472 { 00:08:04.472 "name": "BaseBdev1", 00:08:04.472 "uuid": "83624b04-471c-4e1d-9524-ccb36c4597ae", 00:08:04.472 "is_configured": true, 00:08:04.472 "data_offset": 0, 00:08:04.472 "data_size": 65536 00:08:04.472 }, 00:08:04.472 { 00:08:04.472 "name": "BaseBdev2", 00:08:04.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.472 "is_configured": false, 00:08:04.472 "data_offset": 0, 00:08:04.472 "data_size": 0 00:08:04.472 } 00:08:04.472 ] 00:08:04.472 }' 00:08:04.472 03:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.472 03:16:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.731 [2024-11-21 03:16:52.196285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.731 [2024-11-21 03:16:52.196365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.731 [2024-11-21 03:16:52.208349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.731 [2024-11-21 03:16:52.210476] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.731 [2024-11-21 03:16:52.210525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
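At this point the test has re-created Existed_Raid with only BaseBdev1 claimed, so `verify_raid_bdev_state` expects `configuring` with 1 of 2 base bdevs discovered. The state rule being exercised can be sketched as follows (a simplification; the real check parses `bdev_raid_get_bdevs` JSON with jq):

```shell
#!/bin/sh
# Simplified model of the state transition being verified: a raid bdev
# stays "configuring" until all of its base bdevs are discovered.
num_base_bdevs=2
num_base_bdevs_discovered=1              # only BaseBdev1 exists so far

if [ "$num_base_bdevs_discovered" -ge "$num_base_bdevs" ]; then
    state=online
else
    state=configuring
fi
echo "Existed_Raid state: $state"
```

Once BaseBdev2 is created and claimed later in the trace, the same check flips to `online` with 2 of 2 discovered.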
00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.731 "name": "Existed_Raid", 00:08:04.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.731 "strip_size_kb": 64, 00:08:04.731 "state": "configuring", 00:08:04.731 "raid_level": "raid0", 00:08:04.731 "superblock": false, 00:08:04.731 "num_base_bdevs": 2, 00:08:04.731 
"num_base_bdevs_discovered": 1, 00:08:04.731 "num_base_bdevs_operational": 2, 00:08:04.731 "base_bdevs_list": [ 00:08:04.731 { 00:08:04.731 "name": "BaseBdev1", 00:08:04.731 "uuid": "83624b04-471c-4e1d-9524-ccb36c4597ae", 00:08:04.731 "is_configured": true, 00:08:04.731 "data_offset": 0, 00:08:04.731 "data_size": 65536 00:08:04.731 }, 00:08:04.731 { 00:08:04.731 "name": "BaseBdev2", 00:08:04.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.731 "is_configured": false, 00:08:04.731 "data_offset": 0, 00:08:04.731 "data_size": 0 00:08:04.731 } 00:08:04.731 ] 00:08:04.731 }' 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.731 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.296 [2024-11-21 03:16:52.599641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.296 [2024-11-21 03:16:52.599698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:05.296 [2024-11-21 03:16:52.599713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:05.296 [2024-11-21 03:16:52.600005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:05.296 [2024-11-21 03:16:52.600192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:05.296 [2024-11-21 03:16:52.600213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:05.296 [2024-11-21 03:16:52.600437] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.296 BaseBdev2 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.296 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.296 [ 00:08:05.296 { 00:08:05.296 "name": "BaseBdev2", 00:08:05.296 "aliases": [ 00:08:05.296 "2d82c292-3f71-453a-acb4-642ec18327ba" 00:08:05.296 ], 00:08:05.296 "product_name": "Malloc disk", 00:08:05.296 "block_size": 512, 00:08:05.296 "num_blocks": 65536, 00:08:05.296 "uuid": "2d82c292-3f71-453a-acb4-642ec18327ba", 00:08:05.296 
"assigned_rate_limits": { 00:08:05.296 "rw_ios_per_sec": 0, 00:08:05.296 "rw_mbytes_per_sec": 0, 00:08:05.296 "r_mbytes_per_sec": 0, 00:08:05.296 "w_mbytes_per_sec": 0 00:08:05.296 }, 00:08:05.296 "claimed": true, 00:08:05.296 "claim_type": "exclusive_write", 00:08:05.296 "zoned": false, 00:08:05.296 "supported_io_types": { 00:08:05.296 "read": true, 00:08:05.296 "write": true, 00:08:05.296 "unmap": true, 00:08:05.296 "flush": true, 00:08:05.296 "reset": true, 00:08:05.296 "nvme_admin": false, 00:08:05.296 "nvme_io": false, 00:08:05.296 "nvme_io_md": false, 00:08:05.296 "write_zeroes": true, 00:08:05.296 "zcopy": true, 00:08:05.296 "get_zone_info": false, 00:08:05.296 "zone_management": false, 00:08:05.296 "zone_append": false, 00:08:05.296 "compare": false, 00:08:05.296 "compare_and_write": false, 00:08:05.296 "abort": true, 00:08:05.296 "seek_hole": false, 00:08:05.296 "seek_data": false, 00:08:05.296 "copy": true, 00:08:05.296 "nvme_iov_md": false 00:08:05.296 }, 00:08:05.296 "memory_domains": [ 00:08:05.296 { 00:08:05.296 "dma_device_id": "system", 00:08:05.296 "dma_device_type": 1 00:08:05.296 }, 00:08:05.296 { 00:08:05.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.297 "dma_device_type": 2 00:08:05.297 } 00:08:05.297 ], 00:08:05.297 "driver_specific": {} 00:08:05.297 } 00:08:05.297 ] 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.297 "name": "Existed_Raid", 00:08:05.297 "uuid": "91cd61ec-cf6d-4587-9c1b-21aeff7c4008", 00:08:05.297 "strip_size_kb": 64, 00:08:05.297 "state": "online", 00:08:05.297 "raid_level": "raid0", 00:08:05.297 "superblock": false, 00:08:05.297 "num_base_bdevs": 2, 00:08:05.297 "num_base_bdevs_discovered": 2, 00:08:05.297 "num_base_bdevs_operational": 2, 00:08:05.297 "base_bdevs_list": [ 00:08:05.297 { 
00:08:05.297 "name": "BaseBdev1", 00:08:05.297 "uuid": "83624b04-471c-4e1d-9524-ccb36c4597ae", 00:08:05.297 "is_configured": true, 00:08:05.297 "data_offset": 0, 00:08:05.297 "data_size": 65536 00:08:05.297 }, 00:08:05.297 { 00:08:05.297 "name": "BaseBdev2", 00:08:05.297 "uuid": "2d82c292-3f71-453a-acb4-642ec18327ba", 00:08:05.297 "is_configured": true, 00:08:05.297 "data_offset": 0, 00:08:05.297 "data_size": 65536 00:08:05.297 } 00:08:05.297 ] 00:08:05.297 }' 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.297 03:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.555 [2024-11-21 03:16:53.060219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.555 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:05.556 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.556 "name": "Existed_Raid", 00:08:05.556 "aliases": [ 00:08:05.556 "91cd61ec-cf6d-4587-9c1b-21aeff7c4008" 00:08:05.556 ], 00:08:05.556 "product_name": "Raid Volume", 00:08:05.556 "block_size": 512, 00:08:05.556 "num_blocks": 131072, 00:08:05.556 "uuid": "91cd61ec-cf6d-4587-9c1b-21aeff7c4008", 00:08:05.556 "assigned_rate_limits": { 00:08:05.556 "rw_ios_per_sec": 0, 00:08:05.556 "rw_mbytes_per_sec": 0, 00:08:05.556 "r_mbytes_per_sec": 0, 00:08:05.556 "w_mbytes_per_sec": 0 00:08:05.556 }, 00:08:05.556 "claimed": false, 00:08:05.556 "zoned": false, 00:08:05.556 "supported_io_types": { 00:08:05.556 "read": true, 00:08:05.556 "write": true, 00:08:05.556 "unmap": true, 00:08:05.556 "flush": true, 00:08:05.556 "reset": true, 00:08:05.556 "nvme_admin": false, 00:08:05.556 "nvme_io": false, 00:08:05.556 "nvme_io_md": false, 00:08:05.556 "write_zeroes": true, 00:08:05.556 "zcopy": false, 00:08:05.556 "get_zone_info": false, 00:08:05.556 "zone_management": false, 00:08:05.556 "zone_append": false, 00:08:05.556 "compare": false, 00:08:05.556 "compare_and_write": false, 00:08:05.556 "abort": false, 00:08:05.556 "seek_hole": false, 00:08:05.556 "seek_data": false, 00:08:05.556 "copy": false, 00:08:05.556 "nvme_iov_md": false 00:08:05.556 }, 00:08:05.556 "memory_domains": [ 00:08:05.556 { 00:08:05.556 "dma_device_id": "system", 00:08:05.556 "dma_device_type": 1 00:08:05.556 }, 00:08:05.556 { 00:08:05.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.556 "dma_device_type": 2 00:08:05.556 }, 00:08:05.556 { 00:08:05.556 "dma_device_id": "system", 00:08:05.556 "dma_device_type": 1 00:08:05.556 }, 00:08:05.556 { 00:08:05.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.556 "dma_device_type": 2 00:08:05.556 } 00:08:05.556 ], 00:08:05.556 "driver_specific": { 00:08:05.556 "raid": { 00:08:05.556 "uuid": "91cd61ec-cf6d-4587-9c1b-21aeff7c4008", 
00:08:05.556 "strip_size_kb": 64, 00:08:05.556 "state": "online", 00:08:05.556 "raid_level": "raid0", 00:08:05.556 "superblock": false, 00:08:05.556 "num_base_bdevs": 2, 00:08:05.556 "num_base_bdevs_discovered": 2, 00:08:05.556 "num_base_bdevs_operational": 2, 00:08:05.556 "base_bdevs_list": [ 00:08:05.556 { 00:08:05.556 "name": "BaseBdev1", 00:08:05.556 "uuid": "83624b04-471c-4e1d-9524-ccb36c4597ae", 00:08:05.556 "is_configured": true, 00:08:05.556 "data_offset": 0, 00:08:05.556 "data_size": 65536 00:08:05.556 }, 00:08:05.556 { 00:08:05.556 "name": "BaseBdev2", 00:08:05.556 "uuid": "2d82c292-3f71-453a-acb4-642ec18327ba", 00:08:05.556 "is_configured": true, 00:08:05.556 "data_offset": 0, 00:08:05.556 "data_size": 65536 00:08:05.556 } 00:08:05.556 ] 00:08:05.556 } 00:08:05.556 } 00:08:05.556 }' 00:08:05.556 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.556 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:05.556 BaseBdev2' 00:08:05.556 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.815 [2024-11-21 03:16:53.256011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.815 [2024-11-21 03:16:53.256076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.815 [2024-11-21 03:16:53.256139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.815 03:16:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.815 
03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.815 "name": "Existed_Raid", 00:08:05.815 "uuid": "91cd61ec-cf6d-4587-9c1b-21aeff7c4008", 00:08:05.815 "strip_size_kb": 64, 00:08:05.815 "state": "offline", 00:08:05.815 "raid_level": "raid0", 00:08:05.815 "superblock": false, 00:08:05.815 "num_base_bdevs": 2, 00:08:05.815 "num_base_bdevs_discovered": 1, 00:08:05.815 "num_base_bdevs_operational": 1, 00:08:05.815 "base_bdevs_list": [ 00:08:05.815 { 00:08:05.815 "name": null, 00:08:05.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.815 "is_configured": false, 00:08:05.815 "data_offset": 0, 00:08:05.815 "data_size": 65536 00:08:05.815 }, 00:08:05.815 { 00:08:05.815 "name": "BaseBdev2", 00:08:05.815 "uuid": "2d82c292-3f71-453a-acb4-642ec18327ba", 00:08:05.815 "is_configured": true, 00:08:05.815 "data_offset": 0, 00:08:05.815 "data_size": 65536 00:08:05.815 } 00:08:05.815 ] 00:08:05.815 }' 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.815 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.382 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.382 [2024-11-21 03:16:53.771943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.382 [2024-11-21 03:16:53.772031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.383 03:16:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74111 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74111 ']' 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74111 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74111 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.383 killing process with pid 74111 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74111' 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74111 00:08:06.383 [2024-11-21 03:16:53.869225] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.383 03:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74111 00:08:06.383 [2024-11-21 03:16:53.870322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.643 00:08:06.643 real 0m3.809s 00:08:06.643 user 0m5.983s 00:08:06.643 sys 0m0.787s 00:08:06.643 03:16:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.643 ************************************ 00:08:06.643 END TEST raid_state_function_test 00:08:06.643 ************************************ 00:08:06.643 03:16:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:06.643 03:16:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.643 03:16:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.643 03:16:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.643 ************************************ 00:08:06.643 START TEST raid_state_function_test_sb 00:08:06.643 ************************************ 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74347 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74347' 00:08:06.643 Process 
raid pid: 74347 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74347 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74347 ']' 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.643 03:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.901 [2024-11-21 03:16:54.266733] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:06.901 [2024-11-21 03:16:54.266870] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.901 [2024-11-21 03:16:54.406222] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.901 [2024-11-21 03:16:54.445238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.184 [2024-11-21 03:16:54.477046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.184 [2024-11-21 03:16:54.521312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.184 [2024-11-21 03:16:54.521357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.753 [2024-11-21 03:16:55.144597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.753 [2024-11-21 03:16:55.144666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.753 [2024-11-21 03:16:55.144682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.753 [2024-11-21 03:16:55.144691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.753 03:16:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.753 "name": "Existed_Raid", 00:08:07.753 "uuid": "04e6600c-e0ce-4d44-93e2-1df77b815571", 00:08:07.753 "strip_size_kb": 64, 00:08:07.753 "state": "configuring", 00:08:07.753 "raid_level": "raid0", 00:08:07.753 "superblock": true, 00:08:07.753 "num_base_bdevs": 2, 00:08:07.753 "num_base_bdevs_discovered": 0, 00:08:07.753 "num_base_bdevs_operational": 2, 00:08:07.753 "base_bdevs_list": [ 00:08:07.753 { 
00:08:07.753 "name": "BaseBdev1", 00:08:07.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.753 "is_configured": false, 00:08:07.753 "data_offset": 0, 00:08:07.753 "data_size": 0 00:08:07.753 }, 00:08:07.753 { 00:08:07.753 "name": "BaseBdev2", 00:08:07.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.753 "is_configured": false, 00:08:07.753 "data_offset": 0, 00:08:07.753 "data_size": 0 00:08:07.753 } 00:08:07.753 ] 00:08:07.753 }' 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.753 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.011 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.012 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.012 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.012 [2024-11-21 03:16:55.572612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.012 [2024-11-21 03:16:55.572670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.271 [2024-11-21 03:16:55.584664] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.271 [2024-11-21 03:16:55.584717] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.271 [2024-11-21 03:16:55.584730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.271 [2024-11-21 03:16:55.584742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.271 [2024-11-21 03:16:55.605989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.271 BaseBdev1 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.271 
03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.271 [ 00:08:08.271 { 00:08:08.271 "name": "BaseBdev1", 00:08:08.271 "aliases": [ 00:08:08.271 "0f00a5eb-925b-4e9f-b0ae-80288beabdb8" 00:08:08.271 ], 00:08:08.271 "product_name": "Malloc disk", 00:08:08.271 "block_size": 512, 00:08:08.271 "num_blocks": 65536, 00:08:08.271 "uuid": "0f00a5eb-925b-4e9f-b0ae-80288beabdb8", 00:08:08.271 "assigned_rate_limits": { 00:08:08.271 "rw_ios_per_sec": 0, 00:08:08.271 "rw_mbytes_per_sec": 0, 00:08:08.271 "r_mbytes_per_sec": 0, 00:08:08.271 "w_mbytes_per_sec": 0 00:08:08.271 }, 00:08:08.271 "claimed": true, 00:08:08.271 "claim_type": "exclusive_write", 00:08:08.271 "zoned": false, 00:08:08.271 "supported_io_types": { 00:08:08.271 "read": true, 00:08:08.271 "write": true, 00:08:08.271 "unmap": true, 00:08:08.271 "flush": true, 00:08:08.271 "reset": true, 00:08:08.271 "nvme_admin": false, 00:08:08.271 "nvme_io": false, 00:08:08.271 "nvme_io_md": false, 00:08:08.271 "write_zeroes": true, 00:08:08.271 "zcopy": true, 00:08:08.271 "get_zone_info": false, 00:08:08.271 "zone_management": false, 00:08:08.271 "zone_append": false, 00:08:08.271 "compare": false, 00:08:08.271 "compare_and_write": false, 00:08:08.271 "abort": true, 00:08:08.271 "seek_hole": false, 00:08:08.271 "seek_data": false, 00:08:08.271 "copy": true, 00:08:08.271 "nvme_iov_md": false 00:08:08.271 }, 00:08:08.271 "memory_domains": [ 00:08:08.271 { 00:08:08.271 "dma_device_id": "system", 00:08:08.271 
"dma_device_type": 1 00:08:08.271 }, 00:08:08.271 { 00:08:08.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.271 "dma_device_type": 2 00:08:08.271 } 00:08:08.271 ], 00:08:08.271 "driver_specific": {} 00:08:08.271 } 00:08:08.271 ] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.271 "name": "Existed_Raid", 00:08:08.271 "uuid": "f375b5f3-cb76-4e65-98f9-cca32c40a1cf", 00:08:08.271 "strip_size_kb": 64, 00:08:08.271 "state": "configuring", 00:08:08.271 "raid_level": "raid0", 00:08:08.271 "superblock": true, 00:08:08.271 "num_base_bdevs": 2, 00:08:08.271 "num_base_bdevs_discovered": 1, 00:08:08.271 "num_base_bdevs_operational": 2, 00:08:08.271 "base_bdevs_list": [ 00:08:08.271 { 00:08:08.271 "name": "BaseBdev1", 00:08:08.271 "uuid": "0f00a5eb-925b-4e9f-b0ae-80288beabdb8", 00:08:08.271 "is_configured": true, 00:08:08.271 "data_offset": 2048, 00:08:08.271 "data_size": 63488 00:08:08.271 }, 00:08:08.271 { 00:08:08.271 "name": "BaseBdev2", 00:08:08.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.271 "is_configured": false, 00:08:08.271 "data_offset": 0, 00:08:08.271 "data_size": 0 00:08:08.271 } 00:08:08.271 ] 00:08:08.271 }' 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.271 03:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.531 [2024-11-21 03:16:56.042206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.531 [2024-11-21 03:16:56.042283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.531 [2024-11-21 03:16:56.054263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.531 [2024-11-21 03:16:56.056441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.531 [2024-11-21 03:16:56.056492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.531 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.790 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.790 "name": "Existed_Raid", 00:08:08.790 "uuid": "5cf2527e-4c20-491a-b12f-b6baabe3a4e6", 00:08:08.790 "strip_size_kb": 64, 00:08:08.790 "state": "configuring", 00:08:08.790 "raid_level": "raid0", 00:08:08.790 "superblock": true, 00:08:08.790 "num_base_bdevs": 2, 00:08:08.790 "num_base_bdevs_discovered": 1, 00:08:08.790 "num_base_bdevs_operational": 2, 00:08:08.790 "base_bdevs_list": [ 00:08:08.790 { 00:08:08.790 "name": "BaseBdev1", 00:08:08.790 "uuid": "0f00a5eb-925b-4e9f-b0ae-80288beabdb8", 00:08:08.790 "is_configured": true, 00:08:08.790 "data_offset": 2048, 00:08:08.790 "data_size": 63488 00:08:08.790 }, 00:08:08.790 { 00:08:08.790 "name": "BaseBdev2", 00:08:08.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.790 "is_configured": false, 00:08:08.790 "data_offset": 0, 00:08:08.790 "data_size": 0 
00:08:08.790 } 00:08:08.790 ] 00:08:08.790 }' 00:08:08.790 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.790 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.049 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.049 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.049 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.049 [2024-11-21 03:16:56.477611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.049 [2024-11-21 03:16:56.477837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:09.049 [2024-11-21 03:16:56.477862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:09.049 [2024-11-21 03:16:56.478165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:09.050 [2024-11-21 03:16:56.478331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:09.050 [2024-11-21 03:16:56.478348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:09.050 [2024-11-21 03:16:56.478474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.050 BaseBdev2 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.050 [ 00:08:09.050 { 00:08:09.050 "name": "BaseBdev2", 00:08:09.050 "aliases": [ 00:08:09.050 "73bcd13c-4e28-4d7d-ab18-e5dbe85ac584" 00:08:09.050 ], 00:08:09.050 "product_name": "Malloc disk", 00:08:09.050 "block_size": 512, 00:08:09.050 "num_blocks": 65536, 00:08:09.050 "uuid": "73bcd13c-4e28-4d7d-ab18-e5dbe85ac584", 00:08:09.050 "assigned_rate_limits": { 00:08:09.050 "rw_ios_per_sec": 0, 00:08:09.050 "rw_mbytes_per_sec": 0, 00:08:09.050 "r_mbytes_per_sec": 0, 00:08:09.050 "w_mbytes_per_sec": 0 00:08:09.050 }, 00:08:09.050 "claimed": true, 00:08:09.050 "claim_type": "exclusive_write", 00:08:09.050 "zoned": false, 00:08:09.050 "supported_io_types": { 00:08:09.050 "read": true, 00:08:09.050 "write": true, 00:08:09.050 "unmap": true, 00:08:09.050 "flush": true, 00:08:09.050 "reset": true, 00:08:09.050 "nvme_admin": 
false, 00:08:09.050 "nvme_io": false, 00:08:09.050 "nvme_io_md": false, 00:08:09.050 "write_zeroes": true, 00:08:09.050 "zcopy": true, 00:08:09.050 "get_zone_info": false, 00:08:09.050 "zone_management": false, 00:08:09.050 "zone_append": false, 00:08:09.050 "compare": false, 00:08:09.050 "compare_and_write": false, 00:08:09.050 "abort": true, 00:08:09.050 "seek_hole": false, 00:08:09.050 "seek_data": false, 00:08:09.050 "copy": true, 00:08:09.050 "nvme_iov_md": false 00:08:09.050 }, 00:08:09.050 "memory_domains": [ 00:08:09.050 { 00:08:09.050 "dma_device_id": "system", 00:08:09.050 "dma_device_type": 1 00:08:09.050 }, 00:08:09.050 { 00:08:09.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.050 "dma_device_type": 2 00:08:09.050 } 00:08:09.050 ], 00:08:09.050 "driver_specific": {} 00:08:09.050 } 00:08:09.050 ] 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.050 "name": "Existed_Raid", 00:08:09.050 "uuid": "5cf2527e-4c20-491a-b12f-b6baabe3a4e6", 00:08:09.050 "strip_size_kb": 64, 00:08:09.050 "state": "online", 00:08:09.050 "raid_level": "raid0", 00:08:09.050 "superblock": true, 00:08:09.050 "num_base_bdevs": 2, 00:08:09.050 "num_base_bdevs_discovered": 2, 00:08:09.050 "num_base_bdevs_operational": 2, 00:08:09.050 "base_bdevs_list": [ 00:08:09.050 { 00:08:09.050 "name": "BaseBdev1", 00:08:09.050 "uuid": "0f00a5eb-925b-4e9f-b0ae-80288beabdb8", 00:08:09.050 "is_configured": true, 00:08:09.050 "data_offset": 2048, 00:08:09.050 "data_size": 63488 00:08:09.050 }, 00:08:09.050 { 00:08:09.050 "name": "BaseBdev2", 00:08:09.050 "uuid": "73bcd13c-4e28-4d7d-ab18-e5dbe85ac584", 00:08:09.050 "is_configured": true, 00:08:09.050 "data_offset": 2048, 00:08:09.050 "data_size": 63488 00:08:09.050 } 00:08:09.050 ] 
00:08:09.050 }' 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.050 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.616 03:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.616 [2024-11-21 03:16:56.998210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.616 "name": "Existed_Raid", 00:08:09.616 "aliases": [ 00:08:09.616 "5cf2527e-4c20-491a-b12f-b6baabe3a4e6" 00:08:09.616 ], 00:08:09.616 "product_name": "Raid Volume", 00:08:09.616 "block_size": 512, 00:08:09.616 "num_blocks": 126976, 00:08:09.616 "uuid": 
"5cf2527e-4c20-491a-b12f-b6baabe3a4e6", 00:08:09.616 "assigned_rate_limits": { 00:08:09.616 "rw_ios_per_sec": 0, 00:08:09.616 "rw_mbytes_per_sec": 0, 00:08:09.616 "r_mbytes_per_sec": 0, 00:08:09.616 "w_mbytes_per_sec": 0 00:08:09.616 }, 00:08:09.616 "claimed": false, 00:08:09.616 "zoned": false, 00:08:09.616 "supported_io_types": { 00:08:09.616 "read": true, 00:08:09.616 "write": true, 00:08:09.616 "unmap": true, 00:08:09.616 "flush": true, 00:08:09.616 "reset": true, 00:08:09.616 "nvme_admin": false, 00:08:09.616 "nvme_io": false, 00:08:09.616 "nvme_io_md": false, 00:08:09.616 "write_zeroes": true, 00:08:09.616 "zcopy": false, 00:08:09.616 "get_zone_info": false, 00:08:09.616 "zone_management": false, 00:08:09.616 "zone_append": false, 00:08:09.616 "compare": false, 00:08:09.616 "compare_and_write": false, 00:08:09.616 "abort": false, 00:08:09.616 "seek_hole": false, 00:08:09.616 "seek_data": false, 00:08:09.616 "copy": false, 00:08:09.616 "nvme_iov_md": false 00:08:09.616 }, 00:08:09.616 "memory_domains": [ 00:08:09.616 { 00:08:09.616 "dma_device_id": "system", 00:08:09.616 "dma_device_type": 1 00:08:09.616 }, 00:08:09.616 { 00:08:09.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.616 "dma_device_type": 2 00:08:09.616 }, 00:08:09.616 { 00:08:09.616 "dma_device_id": "system", 00:08:09.616 "dma_device_type": 1 00:08:09.616 }, 00:08:09.616 { 00:08:09.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.616 "dma_device_type": 2 00:08:09.616 } 00:08:09.616 ], 00:08:09.616 "driver_specific": { 00:08:09.616 "raid": { 00:08:09.616 "uuid": "5cf2527e-4c20-491a-b12f-b6baabe3a4e6", 00:08:09.616 "strip_size_kb": 64, 00:08:09.616 "state": "online", 00:08:09.616 "raid_level": "raid0", 00:08:09.616 "superblock": true, 00:08:09.616 "num_base_bdevs": 2, 00:08:09.616 "num_base_bdevs_discovered": 2, 00:08:09.616 "num_base_bdevs_operational": 2, 00:08:09.616 "base_bdevs_list": [ 00:08:09.616 { 00:08:09.616 "name": "BaseBdev1", 00:08:09.616 "uuid": 
"0f00a5eb-925b-4e9f-b0ae-80288beabdb8", 00:08:09.616 "is_configured": true, 00:08:09.616 "data_offset": 2048, 00:08:09.616 "data_size": 63488 00:08:09.616 }, 00:08:09.616 { 00:08:09.616 "name": "BaseBdev2", 00:08:09.616 "uuid": "73bcd13c-4e28-4d7d-ab18-e5dbe85ac584", 00:08:09.616 "is_configured": true, 00:08:09.616 "data_offset": 2048, 00:08:09.616 "data_size": 63488 00:08:09.616 } 00:08:09.616 ] 00:08:09.616 } 00:08:09.616 } 00:08:09.616 }' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.616 BaseBdev2' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ 
]] 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.616 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.874 [2024-11-21 03:16:57.222041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.874 [2024-11-21 03:16:57.222142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.874 [2024-11-21 03:16:57.222240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 
00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.874 03:16:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.874 "name": "Existed_Raid", 00:08:09.874 "uuid": "5cf2527e-4c20-491a-b12f-b6baabe3a4e6", 00:08:09.874 "strip_size_kb": 64, 00:08:09.874 "state": "offline", 00:08:09.874 "raid_level": "raid0", 00:08:09.874 "superblock": true, 00:08:09.874 "num_base_bdevs": 2, 00:08:09.874 "num_base_bdevs_discovered": 1, 00:08:09.874 "num_base_bdevs_operational": 1, 00:08:09.874 "base_bdevs_list": [ 00:08:09.874 { 00:08:09.874 "name": null, 00:08:09.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.874 "is_configured": false, 00:08:09.874 "data_offset": 0, 00:08:09.874 "data_size": 63488 00:08:09.874 }, 00:08:09.874 { 00:08:09.874 "name": "BaseBdev2", 00:08:09.874 "uuid": "73bcd13c-4e28-4d7d-ab18-e5dbe85ac584", 00:08:09.874 "is_configured": true, 00:08:09.874 "data_offset": 2048, 00:08:09.874 "data_size": 63488 00:08:09.874 } 00:08:09.874 ] 00:08:09.874 }' 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.874 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.133 03:16:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.133 [2024-11-21 03:16:57.666133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.133 [2024-11-21 03:16:57.666216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.133 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74347 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74347 ']' 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74347 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74347 00:08:10.392 killing process with pid 74347 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74347' 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74347 00:08:10.392 [2024-11-21 03:16:57.768494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.392 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74347 00:08:10.392 [2024-11-21 03:16:57.769600] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.651 ************************************ 00:08:10.651 END TEST raid_state_function_test_sb 00:08:10.651 ************************************ 00:08:10.651 03:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:10.651 00:08:10.651 real 0m3.828s 00:08:10.651 user 
0m5.987s 00:08:10.651 sys 0m0.820s 00:08:10.651 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.651 03:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.651 03:16:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:10.651 03:16:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:10.651 03:16:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.651 03:16:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.651 ************************************ 00:08:10.651 START TEST raid_superblock_test 00:08:10.651 ************************************ 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:10.651 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74583 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74583 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74583 ']' 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.652 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.652 [2024-11-21 03:16:58.154533] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:08:10.652 [2024-11-21 03:16:58.154752] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74583 ] 00:08:10.910 [2024-11-21 03:16:58.292733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.910 [2024-11-21 03:16:58.331496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.910 [2024-11-21 03:16:58.362270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.910 [2024-11-21 03:16:58.405456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.910 [2024-11-21 03:16:58.405589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.478 03:16:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.478 malloc1 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.478 [2024-11-21 03:16:59.026158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:11.478 [2024-11-21 03:16:59.026320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.478 [2024-11-21 03:16:59.026377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:11.478 [2024-11-21 03:16:59.026418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.478 [2024-11-21 03:16:59.028963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.478 [2024-11-21 03:16:59.029085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:11.478 pt1 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.478 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.737 malloc2 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.737 [2024-11-21 03:16:59.060017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:11.737 [2024-11-21 03:16:59.060107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.737 [2024-11-21 03:16:59.060142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:11.737 [2024-11-21 03:16:59.060153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.737 [2024-11-21 03:16:59.062605] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.737 [2024-11-21 03:16:59.062649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:11.737 pt2 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.737 [2024-11-21 03:16:59.072089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:11.737 [2024-11-21 03:16:59.074245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:11.737 [2024-11-21 03:16:59.074417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:11.737 [2024-11-21 03:16:59.074432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.737 [2024-11-21 03:16:59.074747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:11.737 [2024-11-21 03:16:59.074878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:11.737 [2024-11-21 03:16:59.074899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:11.737 [2024-11-21 03:16:59.075084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.737 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.737 "name": "raid_bdev1", 00:08:11.737 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:11.737 "strip_size_kb": 64, 00:08:11.737 "state": "online", 00:08:11.737 "raid_level": "raid0", 00:08:11.737 "superblock": true, 
00:08:11.737 "num_base_bdevs": 2, 00:08:11.737 "num_base_bdevs_discovered": 2, 00:08:11.738 "num_base_bdevs_operational": 2, 00:08:11.738 "base_bdevs_list": [ 00:08:11.738 { 00:08:11.738 "name": "pt1", 00:08:11.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.738 "is_configured": true, 00:08:11.738 "data_offset": 2048, 00:08:11.738 "data_size": 63488 00:08:11.738 }, 00:08:11.738 { 00:08:11.738 "name": "pt2", 00:08:11.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.738 "is_configured": true, 00:08:11.738 "data_offset": 2048, 00:08:11.738 "data_size": 63488 00:08:11.738 } 00:08:11.738 ] 00:08:11.738 }' 00:08:11.738 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.738 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.996 [2024-11-21 03:16:59.524539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.996 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.996 "name": "raid_bdev1", 00:08:11.996 "aliases": [ 00:08:11.996 "0adaa9f2-a049-42b2-be29-4b0f223d6131" 00:08:11.996 ], 00:08:11.996 "product_name": "Raid Volume", 00:08:11.996 "block_size": 512, 00:08:11.996 "num_blocks": 126976, 00:08:11.996 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:11.996 "assigned_rate_limits": { 00:08:11.996 "rw_ios_per_sec": 0, 00:08:11.996 "rw_mbytes_per_sec": 0, 00:08:11.996 "r_mbytes_per_sec": 0, 00:08:11.996 "w_mbytes_per_sec": 0 00:08:11.996 }, 00:08:11.996 "claimed": false, 00:08:11.996 "zoned": false, 00:08:11.996 "supported_io_types": { 00:08:11.996 "read": true, 00:08:11.996 "write": true, 00:08:11.996 "unmap": true, 00:08:11.996 "flush": true, 00:08:11.996 "reset": true, 00:08:11.996 "nvme_admin": false, 00:08:11.996 "nvme_io": false, 00:08:11.996 "nvme_io_md": false, 00:08:11.996 "write_zeroes": true, 00:08:11.996 "zcopy": false, 00:08:11.996 "get_zone_info": false, 00:08:11.996 "zone_management": false, 00:08:11.996 "zone_append": false, 00:08:11.996 "compare": false, 00:08:11.996 "compare_and_write": false, 00:08:11.996 "abort": false, 00:08:11.996 "seek_hole": false, 00:08:11.996 "seek_data": false, 00:08:11.996 "copy": false, 00:08:11.996 "nvme_iov_md": false 00:08:11.996 }, 00:08:11.996 "memory_domains": [ 00:08:11.996 { 00:08:11.996 "dma_device_id": "system", 00:08:11.996 "dma_device_type": 1 00:08:11.996 }, 00:08:11.996 { 00:08:11.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.996 "dma_device_type": 2 00:08:11.996 }, 00:08:11.996 { 00:08:11.996 "dma_device_id": "system", 00:08:11.996 "dma_device_type": 1 00:08:11.996 }, 00:08:11.996 { 00:08:11.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.996 "dma_device_type": 2 00:08:11.996 } 00:08:11.996 ], 00:08:11.996 
"driver_specific": { 00:08:11.996 "raid": { 00:08:11.996 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:11.996 "strip_size_kb": 64, 00:08:11.996 "state": "online", 00:08:11.996 "raid_level": "raid0", 00:08:11.996 "superblock": true, 00:08:11.996 "num_base_bdevs": 2, 00:08:11.996 "num_base_bdevs_discovered": 2, 00:08:11.996 "num_base_bdevs_operational": 2, 00:08:11.996 "base_bdevs_list": [ 00:08:11.996 { 00:08:11.996 "name": "pt1", 00:08:11.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.997 "is_configured": true, 00:08:11.997 "data_offset": 2048, 00:08:11.997 "data_size": 63488 00:08:11.997 }, 00:08:11.997 { 00:08:11.997 "name": "pt2", 00:08:11.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.997 "is_configured": true, 00:08:11.997 "data_offset": 2048, 00:08:11.997 "data_size": 63488 00:08:11.997 } 00:08:11.997 ] 00:08:11.997 } 00:08:11.997 } 00:08:11.997 }' 00:08:11.997 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:12.255 pt2' 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.255 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.256 [2024-11-21 03:16:59.744533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=0adaa9f2-a049-42b2-be29-4b0f223d6131 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0adaa9f2-a049-42b2-be29-4b0f223d6131 ']' 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.256 [2024-11-21 03:16:59.780257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.256 [2024-11-21 03:16:59.780296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.256 [2024-11-21 03:16:59.780400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.256 [2024-11-21 03:16:59.780457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.256 [2024-11-21 03:16:59.780475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:12.256 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 
00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local 
es=0 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.515 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.515 [2024-11-21 03:16:59.916358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:12.515 [2024-11-21 03:16:59.918571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:12.515 [2024-11-21 03:16:59.918651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:12.515 [2024-11-21 03:16:59.918709] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:12.515 [2024-11-21 03:16:59.918726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.515 [2024-11-21 03:16:59.918739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:12.515 request: 00:08:12.515 { 00:08:12.515 "name": "raid_bdev1", 00:08:12.515 "raid_level": "raid0", 
00:08:12.515 "base_bdevs": [ 00:08:12.515 "malloc1", 00:08:12.515 "malloc2" 00:08:12.515 ], 00:08:12.515 "strip_size_kb": 64, 00:08:12.515 "superblock": false, 00:08:12.515 "method": "bdev_raid_create", 00:08:12.515 "req_id": 1 00:08:12.515 } 00:08:12.515 Got JSON-RPC error response 00:08:12.515 response: 00:08:12.515 { 00:08:12.515 "code": -17, 00:08:12.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:12.516 } 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.516 [2024-11-21 03:16:59.984342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.516 [2024-11-21 03:16:59.984485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.516 [2024-11-21 03:16:59.984526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:12.516 [2024-11-21 03:16:59.984576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.516 [2024-11-21 03:16:59.987022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.516 [2024-11-21 03:16:59.987134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.516 [2024-11-21 03:16:59.987253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:12.516 [2024-11-21 03:16:59.987331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.516 pt1 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.516 03:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.516 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.516 "name": "raid_bdev1", 00:08:12.516 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:12.516 "strip_size_kb": 64, 00:08:12.516 "state": "configuring", 00:08:12.516 "raid_level": "raid0", 00:08:12.516 "superblock": true, 00:08:12.516 "num_base_bdevs": 2, 00:08:12.516 "num_base_bdevs_discovered": 1, 00:08:12.516 "num_base_bdevs_operational": 2, 00:08:12.516 "base_bdevs_list": [ 00:08:12.516 { 00:08:12.516 "name": "pt1", 00:08:12.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.516 "is_configured": true, 00:08:12.516 "data_offset": 2048, 00:08:12.516 "data_size": 63488 00:08:12.516 }, 00:08:12.516 { 00:08:12.516 "name": null, 00:08:12.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.516 "is_configured": false, 00:08:12.516 "data_offset": 2048, 00:08:12.516 "data_size": 63488 00:08:12.516 } 00:08:12.516 ] 00:08:12.516 }' 00:08:12.516 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.516 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 03:17:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:13.083 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:13.083 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.083 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.083 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.083 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.083 [2024-11-21 03:17:00.404440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.083 [2024-11-21 03:17:00.404598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.083 [2024-11-21 03:17:00.404646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:13.083 [2024-11-21 03:17:00.404701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.083 [2024-11-21 03:17:00.405213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.083 [2024-11-21 03:17:00.405288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.083 [2024-11-21 03:17:00.405404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:13.083 [2024-11-21 03:17:00.405465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.084 [2024-11-21 03:17:00.405596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:13.084 [2024-11-21 03:17:00.405640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:13.084 [2024-11-21 03:17:00.405923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:13.084 [2024-11-21 03:17:00.406115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:13.084 [2024-11-21 03:17:00.406163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:13.084 [2024-11-21 03:17:00.406319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.084 pt2 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.084 "name": "raid_bdev1", 00:08:13.084 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:13.084 "strip_size_kb": 64, 00:08:13.084 "state": "online", 00:08:13.084 "raid_level": "raid0", 00:08:13.084 "superblock": true, 00:08:13.084 "num_base_bdevs": 2, 00:08:13.084 "num_base_bdevs_discovered": 2, 00:08:13.084 "num_base_bdevs_operational": 2, 00:08:13.084 "base_bdevs_list": [ 00:08:13.084 { 00:08:13.084 "name": "pt1", 00:08:13.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.084 "is_configured": true, 00:08:13.084 "data_offset": 2048, 00:08:13.084 "data_size": 63488 00:08:13.084 }, 00:08:13.084 { 00:08:13.084 "name": "pt2", 00:08:13.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.084 "is_configured": true, 00:08:13.084 "data_offset": 2048, 00:08:13.084 "data_size": 63488 00:08:13.084 } 00:08:13.084 ] 00:08:13.084 }' 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.084 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.343 
03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.343 [2024-11-21 03:17:00.864883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.343 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.343 "name": "raid_bdev1", 00:08:13.343 "aliases": [ 00:08:13.343 "0adaa9f2-a049-42b2-be29-4b0f223d6131" 00:08:13.343 ], 00:08:13.343 "product_name": "Raid Volume", 00:08:13.343 "block_size": 512, 00:08:13.343 "num_blocks": 126976, 00:08:13.343 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:13.343 "assigned_rate_limits": { 00:08:13.343 "rw_ios_per_sec": 0, 00:08:13.343 "rw_mbytes_per_sec": 0, 00:08:13.343 "r_mbytes_per_sec": 0, 00:08:13.343 "w_mbytes_per_sec": 0 00:08:13.343 }, 00:08:13.343 "claimed": false, 00:08:13.343 "zoned": false, 00:08:13.343 "supported_io_types": { 00:08:13.343 "read": true, 00:08:13.343 "write": true, 00:08:13.343 "unmap": true, 00:08:13.343 "flush": true, 00:08:13.343 "reset": true, 00:08:13.343 "nvme_admin": false, 00:08:13.343 "nvme_io": false, 00:08:13.343 "nvme_io_md": false, 00:08:13.343 "write_zeroes": true, 00:08:13.343 "zcopy": false, 00:08:13.343 "get_zone_info": false, 00:08:13.343 "zone_management": false, 00:08:13.343 "zone_append": false, 00:08:13.343 "compare": false, 00:08:13.343 
"compare_and_write": false, 00:08:13.343 "abort": false, 00:08:13.343 "seek_hole": false, 00:08:13.343 "seek_data": false, 00:08:13.343 "copy": false, 00:08:13.343 "nvme_iov_md": false 00:08:13.343 }, 00:08:13.343 "memory_domains": [ 00:08:13.343 { 00:08:13.343 "dma_device_id": "system", 00:08:13.343 "dma_device_type": 1 00:08:13.343 }, 00:08:13.343 { 00:08:13.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.343 "dma_device_type": 2 00:08:13.343 }, 00:08:13.343 { 00:08:13.343 "dma_device_id": "system", 00:08:13.343 "dma_device_type": 1 00:08:13.343 }, 00:08:13.343 { 00:08:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.344 "dma_device_type": 2 00:08:13.344 } 00:08:13.344 ], 00:08:13.344 "driver_specific": { 00:08:13.344 "raid": { 00:08:13.344 "uuid": "0adaa9f2-a049-42b2-be29-4b0f223d6131", 00:08:13.344 "strip_size_kb": 64, 00:08:13.344 "state": "online", 00:08:13.344 "raid_level": "raid0", 00:08:13.344 "superblock": true, 00:08:13.344 "num_base_bdevs": 2, 00:08:13.344 "num_base_bdevs_discovered": 2, 00:08:13.344 "num_base_bdevs_operational": 2, 00:08:13.344 "base_bdevs_list": [ 00:08:13.344 { 00:08:13.344 "name": "pt1", 00:08:13.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.344 "is_configured": true, 00:08:13.344 "data_offset": 2048, 00:08:13.344 "data_size": 63488 00:08:13.344 }, 00:08:13.344 { 00:08:13.344 "name": "pt2", 00:08:13.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.344 "is_configured": true, 00:08:13.344 "data_offset": 2048, 00:08:13.344 "data_size": 63488 00:08:13.344 } 00:08:13.344 ] 00:08:13.344 } 00:08:13.344 } 00:08:13.344 }' 00:08:13.344 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.603 pt2' 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.603 03:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.603 03:17:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.603 [2024-11-21 03:17:01.052960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0adaa9f2-a049-42b2-be29-4b0f223d6131 '!=' 0adaa9f2-a049-42b2-be29-4b0f223d6131 ']' 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74583 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74583 ']' 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74583 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74583 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.603 03:17:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74583' 00:08:13.603 killing process with pid 74583 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74583 00:08:13.603 [2024-11-21 03:17:01.142578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.603 [2024-11-21 03:17:01.142779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.603 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74583 00:08:13.603 [2024-11-21 03:17:01.142872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.603 [2024-11-21 03:17:01.142889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:13.878 [2024-11-21 03:17:01.167858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.878 03:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:13.878 00:08:13.878 real 0m3.328s 00:08:13.878 user 0m5.080s 00:08:13.878 sys 0m0.744s 00:08:13.878 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.878 ************************************ 00:08:13.878 END TEST raid_superblock_test 00:08:13.878 ************************************ 00:08:13.878 03:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.138 03:17:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:14.138 03:17:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.138 03:17:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.138 03:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.138 ************************************ 00:08:14.138 START TEST raid_read_error_test 00:08:14.138 
************************************ 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 
-- # local fail_per_s 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PmCs5HZ9HM 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74789 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:14.138 03:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74789 00:08:14.139 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74789 ']' 00:08:14.139 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.139 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.139 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.139 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.139 03:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.139 [2024-11-21 03:17:01.570036] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:08:14.139 [2024-11-21 03:17:01.570287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74789 ] 00:08:14.397 [2024-11-21 03:17:01.710676] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:14.397 [2024-11-21 03:17:01.749584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.397 [2024-11-21 03:17:01.779985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.397 [2024-11-21 03:17:01.823456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.397 [2024-11-21 03:17:01.823585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.964 BaseBdev1_malloc 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.964 03:17:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.964 true 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.964 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.964 [2024-11-21 03:17:02.524118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:14.964 [2024-11-21 03:17:02.524216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.964 [2024-11-21 03:17:02.524244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:14.964 [2024-11-21 03:17:02.524259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.964 [2024-11-21 03:17:02.526886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.964 [2024-11-21 03:17:02.526958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.224 BaseBdev1 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.224 BaseBdev2_malloc 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.224 true 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.224 [2024-11-21 03:17:02.565279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:15.224 [2024-11-21 03:17:02.565422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.224 [2024-11-21 03:17:02.565449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:15.224 [2024-11-21 03:17:02.565461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.224 [2024-11-21 03:17:02.567889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.224 [2024-11-21 03:17:02.567938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:15.224 BaseBdev2 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.224 03:17:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.224 [2024-11-21 03:17:02.577303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.224 [2024-11-21 03:17:02.579438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.224 [2024-11-21 03:17:02.579647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:15.224 [2024-11-21 03:17:02.579664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.224 [2024-11-21 03:17:02.580003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:15.224 [2024-11-21 03:17:02.580201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:15.224 [2024-11-21 03:17:02.580213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:15.224 [2024-11-21 03:17:02.580399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.224 "name": "raid_bdev1", 00:08:15.224 "uuid": "b3633936-2ecc-44b1-8783-3cc82fd8aaae", 00:08:15.224 "strip_size_kb": 64, 00:08:15.224 "state": "online", 00:08:15.224 "raid_level": "raid0", 00:08:15.224 "superblock": true, 00:08:15.224 "num_base_bdevs": 2, 00:08:15.224 "num_base_bdevs_discovered": 2, 00:08:15.224 "num_base_bdevs_operational": 2, 00:08:15.224 "base_bdevs_list": [ 00:08:15.224 { 00:08:15.224 "name": "BaseBdev1", 00:08:15.224 "uuid": "6aa18165-d719-5e36-b23c-02ea9f9bdeda", 00:08:15.224 "is_configured": true, 00:08:15.224 "data_offset": 2048, 00:08:15.224 "data_size": 63488 00:08:15.224 }, 00:08:15.224 { 00:08:15.224 "name": "BaseBdev2", 00:08:15.224 "uuid": "68eff537-ef3e-5b45-846a-e91a9c46aac8", 00:08:15.224 "is_configured": true, 00:08:15.224 "data_offset": 2048, 00:08:15.224 "data_size": 63488 00:08:15.224 } 00:08:15.224 ] 00:08:15.224 }' 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.224 03:17:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.483 03:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.483 03:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.742 [2024-11-21 03:17:03.137896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.680 "name": "raid_bdev1", 00:08:16.680 "uuid": "b3633936-2ecc-44b1-8783-3cc82fd8aaae", 00:08:16.680 "strip_size_kb": 64, 00:08:16.680 "state": "online", 00:08:16.680 "raid_level": "raid0", 00:08:16.680 "superblock": true, 00:08:16.680 "num_base_bdevs": 2, 00:08:16.680 "num_base_bdevs_discovered": 2, 00:08:16.680 "num_base_bdevs_operational": 2, 00:08:16.680 "base_bdevs_list": [ 00:08:16.680 { 00:08:16.680 "name": "BaseBdev1", 00:08:16.680 "uuid": "6aa18165-d719-5e36-b23c-02ea9f9bdeda", 00:08:16.680 "is_configured": true, 00:08:16.680 "data_offset": 2048, 00:08:16.680 "data_size": 63488 00:08:16.680 }, 00:08:16.680 { 00:08:16.680 "name": "BaseBdev2", 00:08:16.680 "uuid": "68eff537-ef3e-5b45-846a-e91a9c46aac8", 00:08:16.680 "is_configured": true, 00:08:16.680 "data_offset": 2048, 00:08:16.680 "data_size": 63488 00:08:16.680 } 00:08:16.680 ] 00:08:16.680 }' 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.680 03:17:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.940 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.941 [2024-11-21 03:17:04.468859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.941 [2024-11-21 03:17:04.468984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.941 [2024-11-21 03:17:04.471794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.941 [2024-11-21 03:17:04.471925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.941 [2024-11-21 03:17:04.471985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.941 [2024-11-21 03:17:04.472052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:16.941 { 00:08:16.941 "results": [ 00:08:16.941 { 00:08:16.941 "job": "raid_bdev1", 00:08:16.941 "core_mask": "0x1", 00:08:16.941 "workload": "randrw", 00:08:16.941 "percentage": 50, 00:08:16.941 "status": "finished", 00:08:16.941 "queue_depth": 1, 00:08:16.941 "io_size": 131072, 00:08:16.941 "runtime": 1.328861, 00:08:16.941 "iops": 14981.250860699502, 00:08:16.941 "mibps": 1872.6563575874377, 00:08:16.941 "io_failed": 1, 00:08:16.941 "io_timeout": 0, 00:08:16.941 "avg_latency_us": 92.58556520199681, 00:08:16.941 "min_latency_us": 28.11470408785845, 00:08:16.941 "max_latency_us": 1677.9569423864725 00:08:16.941 } 00:08:16.941 ], 00:08:16.941 "core_count": 1 00:08:16.941 } 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@841 -- # killprocess 74789 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74789 ']' 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74789 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.941 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74789 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74789' 00:08:17.201 killing process with pid 74789 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74789 00:08:17.201 [2024-11-21 03:17:04.510562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74789 00:08:17.201 [2024-11-21 03:17:04.527193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PmCs5HZ9HM 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:17.201 00:08:17.201 real 0m3.290s 00:08:17.201 user 0m4.254s 00:08:17.201 sys 0m0.529s 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.201 03:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.201 ************************************ 00:08:17.201 END TEST raid_read_error_test 00:08:17.201 ************************************ 00:08:17.462 03:17:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:17.462 03:17:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.462 03:17:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.462 03:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.462 ************************************ 00:08:17.462 START TEST raid_write_error_test 00:08:17.462 ************************************ 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.462 03:17:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UTR0BxZrDT 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74918 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74918 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74918 ']' 00:08:17.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.462 03:17:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.462 [2024-11-21 03:17:04.937279] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:17.462 [2024-11-21 03:17:04.937443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:08:17.722 [2024-11-21 03:17:05.078049] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:17.722 [2024-11-21 03:17:05.100209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.722 [2024-11-21 03:17:05.130479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.722 [2024-11-21 03:17:05.173820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.722 [2024-11-21 03:17:05.173869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.292 BaseBdev1_malloc 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.292 true 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.292 03:17:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.292 [2024-11-21 03:17:05.826643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.292 [2024-11-21 03:17:05.826717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.292 [2024-11-21 03:17:05.826739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.292 [2024-11-21 03:17:05.826754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.292 [2024-11-21 03:17:05.829165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.292 [2024-11-21 03:17:05.829266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.292 BaseBdev1 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.292 BaseBdev2_malloc 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.292 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.552 true 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.552 [2024-11-21 03:17:05.867577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.552 [2024-11-21 03:17:05.867652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.552 [2024-11-21 03:17:05.867676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.552 [2024-11-21 03:17:05.867688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.552 [2024-11-21 03:17:05.870260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.552 [2024-11-21 03:17:05.870310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.552 BaseBdev2 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.552 [2024-11-21 03:17:05.879600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.552 [2024-11-21 03:17:05.881873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.552 [2024-11-21 03:17:05.882111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:18.552 
[2024-11-21 03:17:05.882128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.552 [2024-11-21 03:17:05.882431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:18.552 [2024-11-21 03:17:05.882617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:18.552 [2024-11-21 03:17:05.882629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:18.552 [2024-11-21 03:17:05.882814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.552 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.553 03:17:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.553 "name": "raid_bdev1", 00:08:18.553 "uuid": "52b89686-54a6-4156-9969-813c834b8bbc", 00:08:18.553 "strip_size_kb": 64, 00:08:18.553 "state": "online", 00:08:18.553 "raid_level": "raid0", 00:08:18.553 "superblock": true, 00:08:18.553 "num_base_bdevs": 2, 00:08:18.553 "num_base_bdevs_discovered": 2, 00:08:18.553 "num_base_bdevs_operational": 2, 00:08:18.553 "base_bdevs_list": [ 00:08:18.553 { 00:08:18.553 "name": "BaseBdev1", 00:08:18.553 "uuid": "d311d454-9307-546b-8ece-0a7288567408", 00:08:18.553 "is_configured": true, 00:08:18.553 "data_offset": 2048, 00:08:18.553 "data_size": 63488 00:08:18.553 }, 00:08:18.553 { 00:08:18.553 "name": "BaseBdev2", 00:08:18.553 "uuid": "3954f8f5-4244-56fc-916a-f88deb2e3baa", 00:08:18.553 "is_configured": true, 00:08:18.553 "data_offset": 2048, 00:08:18.553 "data_size": 63488 00:08:18.553 } 00:08:18.553 ] 00:08:18.553 }' 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.553 03:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.813 03:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.813 03:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.073 [2024-11-21 03:17:06.400203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:20.030 03:17:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.030 "name": "raid_bdev1", 00:08:20.030 "uuid": "52b89686-54a6-4156-9969-813c834b8bbc", 00:08:20.030 "strip_size_kb": 64, 00:08:20.030 "state": "online", 00:08:20.030 "raid_level": "raid0", 00:08:20.030 "superblock": true, 00:08:20.030 "num_base_bdevs": 2, 00:08:20.030 "num_base_bdevs_discovered": 2, 00:08:20.030 "num_base_bdevs_operational": 2, 00:08:20.030 "base_bdevs_list": [ 00:08:20.030 { 00:08:20.030 "name": "BaseBdev1", 00:08:20.030 "uuid": "d311d454-9307-546b-8ece-0a7288567408", 00:08:20.030 "is_configured": true, 00:08:20.030 "data_offset": 2048, 00:08:20.030 "data_size": 63488 00:08:20.030 }, 00:08:20.030 { 00:08:20.030 "name": "BaseBdev2", 00:08:20.030 "uuid": "3954f8f5-4244-56fc-916a-f88deb2e3baa", 00:08:20.030 "is_configured": true, 00:08:20.030 "data_offset": 2048, 00:08:20.030 "data_size": 63488 00:08:20.030 } 00:08:20.030 ] 00:08:20.030 }' 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.030 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.290 [2024-11-21 03:17:07.751392] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.290 [2024-11-21 03:17:07.751532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.290 [2024-11-21 03:17:07.754633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.290 [2024-11-21 03:17:07.754736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.290 [2024-11-21 03:17:07.754777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.290 [2024-11-21 03:17:07.754792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:20.290 { 00:08:20.290 "results": [ 00:08:20.290 { 00:08:20.290 "job": "raid_bdev1", 00:08:20.290 "core_mask": "0x1", 00:08:20.290 "workload": "randrw", 00:08:20.290 "percentage": 50, 00:08:20.290 "status": "finished", 00:08:20.290 "queue_depth": 1, 00:08:20.290 "io_size": 131072, 00:08:20.290 "runtime": 1.349059, 00:08:20.290 "iops": 14860.728848775332, 00:08:20.290 "mibps": 1857.5911060969165, 00:08:20.290 "io_failed": 1, 00:08:20.290 "io_timeout": 0, 00:08:20.290 "avg_latency_us": 93.26421495436936, 00:08:20.290 "min_latency_us": 28.11470408785845, 00:08:20.290 "max_latency_us": 1670.8167000784451 00:08:20.290 } 00:08:20.290 ], 00:08:20.290 "core_count": 1 00:08:20.290 } 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74918 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74918 ']' 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74918 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:20.290 03:17:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74918 00:08:20.290 killing process with pid 74918 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74918' 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74918 00:08:20.290 [2024-11-21 03:17:07.803922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.290 03:17:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74918 00:08:20.290 [2024-11-21 03:17:07.820517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UTR0BxZrDT 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.550 03:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:20.550 00:08:20.550 real 0m3.217s 00:08:20.550 user 0m4.097s 00:08:20.550 sys 0m0.528s 00:08:20.551 03:17:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.551 03:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.551 ************************************ 00:08:20.551 END TEST raid_write_error_test 00:08:20.551 ************************************ 00:08:20.551 03:17:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:20.551 03:17:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:20.551 03:17:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:20.551 03:17:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.551 03:17:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.551 ************************************ 00:08:20.551 START TEST raid_state_function_test 00:08:20.551 ************************************ 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.551 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75045 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.811 Process raid pid: 75045 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process 
raid pid: 75045' 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75045 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75045 ']' 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.811 03:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.812 [2024-11-21 03:17:08.207652] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:20.812 [2024-11-21 03:17:08.207796] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.812 [2024-11-21 03:17:08.347056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:21.071 [2024-11-21 03:17:08.383313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.071 [2024-11-21 03:17:08.413860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.072 [2024-11-21 03:17:08.456575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.072 [2024-11-21 03:17:08.456616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.640 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.641 [2024-11-21 03:17:09.079778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.641 [2024-11-21 03:17:09.079960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.641 [2024-11-21 03:17:09.079997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.641 [2024-11-21 03:17:09.080045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.641 "name": "Existed_Raid", 00:08:21.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.641 "strip_size_kb": 64, 00:08:21.641 "state": "configuring", 00:08:21.641 "raid_level": "concat", 00:08:21.641 "superblock": false, 00:08:21.641 "num_base_bdevs": 2, 00:08:21.641 "num_base_bdevs_discovered": 0, 00:08:21.641 "num_base_bdevs_operational": 2, 00:08:21.641 "base_bdevs_list": [ 00:08:21.641 { 00:08:21.641 "name": "BaseBdev1", 00:08:21.641 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:21.641 "is_configured": false, 00:08:21.641 "data_offset": 0, 00:08:21.641 "data_size": 0 00:08:21.641 }, 00:08:21.641 { 00:08:21.641 "name": "BaseBdev2", 00:08:21.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.641 "is_configured": false, 00:08:21.641 "data_offset": 0, 00:08:21.641 "data_size": 0 00:08:21.641 } 00:08:21.641 ] 00:08:21.641 }' 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.641 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.209 [2024-11-21 03:17:09.535809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.209 [2024-11-21 03:17:09.535940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.209 [2024-11-21 03:17:09.547857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.209 [2024-11-21 03:17:09.547975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.209 
[2024-11-21 03:17:09.548023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.209 [2024-11-21 03:17:09.548050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.209 [2024-11-21 03:17:09.564915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.209 BaseBdev1 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.209 03:17:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.209 [ 00:08:22.209 { 00:08:22.209 "name": "BaseBdev1", 00:08:22.209 "aliases": [ 00:08:22.209 "a019f24b-4f08-4924-aabe-d0c64e0e0e4f" 00:08:22.209 ], 00:08:22.209 "product_name": "Malloc disk", 00:08:22.209 "block_size": 512, 00:08:22.209 "num_blocks": 65536, 00:08:22.209 "uuid": "a019f24b-4f08-4924-aabe-d0c64e0e0e4f", 00:08:22.209 "assigned_rate_limits": { 00:08:22.209 "rw_ios_per_sec": 0, 00:08:22.209 "rw_mbytes_per_sec": 0, 00:08:22.209 "r_mbytes_per_sec": 0, 00:08:22.209 "w_mbytes_per_sec": 0 00:08:22.209 }, 00:08:22.209 "claimed": true, 00:08:22.209 "claim_type": "exclusive_write", 00:08:22.209 "zoned": false, 00:08:22.209 "supported_io_types": { 00:08:22.209 "read": true, 00:08:22.209 "write": true, 00:08:22.209 "unmap": true, 00:08:22.209 "flush": true, 00:08:22.209 "reset": true, 00:08:22.209 "nvme_admin": false, 00:08:22.209 "nvme_io": false, 00:08:22.209 "nvme_io_md": false, 00:08:22.209 "write_zeroes": true, 00:08:22.209 "zcopy": true, 00:08:22.209 "get_zone_info": false, 00:08:22.209 "zone_management": false, 00:08:22.209 "zone_append": false, 00:08:22.209 "compare": false, 00:08:22.209 "compare_and_write": false, 00:08:22.209 "abort": true, 00:08:22.209 "seek_hole": false, 00:08:22.209 "seek_data": false, 00:08:22.209 "copy": true, 00:08:22.209 "nvme_iov_md": false 00:08:22.209 }, 00:08:22.209 "memory_domains": [ 00:08:22.209 { 00:08:22.209 "dma_device_id": "system", 00:08:22.209 "dma_device_type": 1 00:08:22.209 }, 00:08:22.209 { 00:08:22.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.209 "dma_device_type": 
2 00:08:22.209 } 00:08:22.209 ], 00:08:22.209 "driver_specific": {} 00:08:22.209 } 00:08:22.209 ] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.209 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.210 "name": "Existed_Raid", 00:08:22.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.210 "strip_size_kb": 64, 00:08:22.210 "state": "configuring", 00:08:22.210 "raid_level": "concat", 00:08:22.210 "superblock": false, 00:08:22.210 "num_base_bdevs": 2, 00:08:22.210 "num_base_bdevs_discovered": 1, 00:08:22.210 "num_base_bdevs_operational": 2, 00:08:22.210 "base_bdevs_list": [ 00:08:22.210 { 00:08:22.210 "name": "BaseBdev1", 00:08:22.210 "uuid": "a019f24b-4f08-4924-aabe-d0c64e0e0e4f", 00:08:22.210 "is_configured": true, 00:08:22.210 "data_offset": 0, 00:08:22.210 "data_size": 65536 00:08:22.210 }, 00:08:22.210 { 00:08:22.210 "name": "BaseBdev2", 00:08:22.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.210 "is_configured": false, 00:08:22.210 "data_offset": 0, 00:08:22.210 "data_size": 0 00:08:22.210 } 00:08:22.210 ] 00:08:22.210 }' 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.210 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.469 03:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.469 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.469 03:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.469 [2024-11-21 03:17:10.001145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.469 [2024-11-21 03:17:10.001329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.469 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.469 03:17:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.469 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.469 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.470 [2024-11-21 03:17:10.013230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.470 [2024-11-21 03:17:10.015433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.470 [2024-11-21 03:17:10.015523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.470 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.728 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.728 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.728 "name": "Existed_Raid", 00:08:22.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.728 "strip_size_kb": 64, 00:08:22.729 "state": "configuring", 00:08:22.729 "raid_level": "concat", 00:08:22.729 "superblock": false, 00:08:22.729 "num_base_bdevs": 2, 00:08:22.729 "num_base_bdevs_discovered": 1, 00:08:22.729 "num_base_bdevs_operational": 2, 00:08:22.729 "base_bdevs_list": [ 00:08:22.729 { 00:08:22.729 "name": "BaseBdev1", 00:08:22.729 "uuid": "a019f24b-4f08-4924-aabe-d0c64e0e0e4f", 00:08:22.729 "is_configured": true, 00:08:22.729 "data_offset": 0, 00:08:22.729 "data_size": 65536 00:08:22.729 }, 00:08:22.729 { 00:08:22.729 "name": "BaseBdev2", 00:08:22.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.729 "is_configured": false, 00:08:22.729 "data_offset": 0, 00:08:22.729 "data_size": 0 00:08:22.729 } 00:08:22.729 ] 00:08:22.729 }' 00:08:22.729 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.729 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.988 [2024-11-21 03:17:10.512552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.988 [2024-11-21 03:17:10.512695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:22.988 [2024-11-21 03:17:10.512725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:22.988 [2024-11-21 03:17:10.513052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:22.988 [2024-11-21 03:17:10.513256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:22.988 [2024-11-21 03:17:10.513303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:22.988 [2024-11-21 03:17:10.513559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.988 BaseBdev2 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.988 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.988 [ 00:08:22.988 { 00:08:22.988 "name": "BaseBdev2", 00:08:22.988 "aliases": [ 00:08:22.988 "4fd41417-8475-446d-81a5-a06fab345080" 00:08:22.988 ], 00:08:22.988 "product_name": "Malloc disk", 00:08:22.988 "block_size": 512, 00:08:22.988 "num_blocks": 65536, 00:08:22.988 "uuid": "4fd41417-8475-446d-81a5-a06fab345080", 00:08:22.988 "assigned_rate_limits": { 00:08:22.988 "rw_ios_per_sec": 0, 00:08:22.988 "rw_mbytes_per_sec": 0, 00:08:22.988 "r_mbytes_per_sec": 0, 00:08:22.988 "w_mbytes_per_sec": 0 00:08:22.988 }, 00:08:22.988 "claimed": true, 00:08:22.988 "claim_type": "exclusive_write", 00:08:22.988 "zoned": false, 00:08:22.988 "supported_io_types": { 00:08:22.988 "read": true, 00:08:22.988 "write": true, 00:08:22.988 "unmap": true, 00:08:22.988 "flush": true, 00:08:22.988 "reset": true, 00:08:22.988 "nvme_admin": false, 00:08:22.988 "nvme_io": false, 00:08:22.988 "nvme_io_md": false, 00:08:22.988 "write_zeroes": true, 00:08:22.988 "zcopy": true, 00:08:22.988 "get_zone_info": false, 00:08:22.988 "zone_management": false, 00:08:22.988 "zone_append": false, 00:08:22.988 "compare": false, 00:08:22.988 "compare_and_write": false, 
00:08:22.988 "abort": true, 00:08:22.988 "seek_hole": false, 00:08:22.988 "seek_data": false, 00:08:22.988 "copy": true, 00:08:22.988 "nvme_iov_md": false 00:08:22.988 }, 00:08:22.988 "memory_domains": [ 00:08:22.988 { 00:08:22.988 "dma_device_id": "system", 00:08:22.988 "dma_device_type": 1 00:08:22.988 }, 00:08:22.988 { 00:08:22.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.988 "dma_device_type": 2 00:08:22.989 } 00:08:22.989 ], 00:08:22.989 "driver_specific": {} 00:08:22.989 } 00:08:22.989 ] 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.989 
03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.989 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.248 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.248 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.248 "name": "Existed_Raid", 00:08:23.248 "uuid": "fbcaff5b-c23a-4fd1-8aa6-3092dd60d407", 00:08:23.248 "strip_size_kb": 64, 00:08:23.248 "state": "online", 00:08:23.248 "raid_level": "concat", 00:08:23.248 "superblock": false, 00:08:23.248 "num_base_bdevs": 2, 00:08:23.248 "num_base_bdevs_discovered": 2, 00:08:23.248 "num_base_bdevs_operational": 2, 00:08:23.248 "base_bdevs_list": [ 00:08:23.248 { 00:08:23.248 "name": "BaseBdev1", 00:08:23.248 "uuid": "a019f24b-4f08-4924-aabe-d0c64e0e0e4f", 00:08:23.248 "is_configured": true, 00:08:23.248 "data_offset": 0, 00:08:23.248 "data_size": 65536 00:08:23.248 }, 00:08:23.248 { 00:08:23.248 "name": "BaseBdev2", 00:08:23.248 "uuid": "4fd41417-8475-446d-81a5-a06fab345080", 00:08:23.248 "is_configured": true, 00:08:23.248 "data_offset": 0, 00:08:23.248 "data_size": 65536 00:08:23.248 } 00:08:23.248 ] 00:08:23.248 }' 00:08:23.248 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.248 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.508 03:17:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.508 03:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.508 [2024-11-21 03:17:10.993141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.508 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.508 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.508 "name": "Existed_Raid", 00:08:23.508 "aliases": [ 00:08:23.508 "fbcaff5b-c23a-4fd1-8aa6-3092dd60d407" 00:08:23.508 ], 00:08:23.508 "product_name": "Raid Volume", 00:08:23.508 "block_size": 512, 00:08:23.508 "num_blocks": 131072, 00:08:23.508 "uuid": "fbcaff5b-c23a-4fd1-8aa6-3092dd60d407", 00:08:23.508 "assigned_rate_limits": { 00:08:23.508 "rw_ios_per_sec": 0, 00:08:23.508 "rw_mbytes_per_sec": 0, 00:08:23.508 "r_mbytes_per_sec": 0, 00:08:23.508 "w_mbytes_per_sec": 0 00:08:23.508 }, 00:08:23.508 "claimed": false, 00:08:23.508 "zoned": false, 00:08:23.508 "supported_io_types": { 00:08:23.508 "read": true, 00:08:23.508 "write": true, 00:08:23.508 "unmap": true, 00:08:23.508 
"flush": true, 00:08:23.508 "reset": true, 00:08:23.508 "nvme_admin": false, 00:08:23.508 "nvme_io": false, 00:08:23.508 "nvme_io_md": false, 00:08:23.508 "write_zeroes": true, 00:08:23.508 "zcopy": false, 00:08:23.508 "get_zone_info": false, 00:08:23.508 "zone_management": false, 00:08:23.508 "zone_append": false, 00:08:23.508 "compare": false, 00:08:23.508 "compare_and_write": false, 00:08:23.508 "abort": false, 00:08:23.508 "seek_hole": false, 00:08:23.508 "seek_data": false, 00:08:23.508 "copy": false, 00:08:23.508 "nvme_iov_md": false 00:08:23.509 }, 00:08:23.509 "memory_domains": [ 00:08:23.509 { 00:08:23.509 "dma_device_id": "system", 00:08:23.509 "dma_device_type": 1 00:08:23.509 }, 00:08:23.509 { 00:08:23.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.509 "dma_device_type": 2 00:08:23.509 }, 00:08:23.509 { 00:08:23.509 "dma_device_id": "system", 00:08:23.509 "dma_device_type": 1 00:08:23.509 }, 00:08:23.509 { 00:08:23.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.509 "dma_device_type": 2 00:08:23.509 } 00:08:23.509 ], 00:08:23.509 "driver_specific": { 00:08:23.509 "raid": { 00:08:23.509 "uuid": "fbcaff5b-c23a-4fd1-8aa6-3092dd60d407", 00:08:23.509 "strip_size_kb": 64, 00:08:23.509 "state": "online", 00:08:23.509 "raid_level": "concat", 00:08:23.509 "superblock": false, 00:08:23.509 "num_base_bdevs": 2, 00:08:23.509 "num_base_bdevs_discovered": 2, 00:08:23.509 "num_base_bdevs_operational": 2, 00:08:23.509 "base_bdevs_list": [ 00:08:23.509 { 00:08:23.509 "name": "BaseBdev1", 00:08:23.509 "uuid": "a019f24b-4f08-4924-aabe-d0c64e0e0e4f", 00:08:23.509 "is_configured": true, 00:08:23.509 "data_offset": 0, 00:08:23.509 "data_size": 65536 00:08:23.509 }, 00:08:23.509 { 00:08:23.509 "name": "BaseBdev2", 00:08:23.509 "uuid": "4fd41417-8475-446d-81a5-a06fab345080", 00:08:23.509 "is_configured": true, 00:08:23.509 "data_offset": 0, 00:08:23.509 "data_size": 65536 00:08:23.509 } 00:08:23.509 ] 00:08:23.509 } 00:08:23.509 } 00:08:23.509 }' 00:08:23.509 
03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.509 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.509 BaseBdev2' 00:08:23.509 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.769 03:17:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.769 [2024-11-21 03:17:11.200950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.769 [2024-11-21 03:17:11.201095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.769 [2024-11-21 03:17:11.201185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.769 "name": "Existed_Raid", 00:08:23.769 "uuid": "fbcaff5b-c23a-4fd1-8aa6-3092dd60d407", 00:08:23.769 "strip_size_kb": 64, 00:08:23.769 "state": "offline", 00:08:23.769 "raid_level": "concat", 00:08:23.769 "superblock": false, 00:08:23.769 "num_base_bdevs": 2, 00:08:23.769 "num_base_bdevs_discovered": 1, 00:08:23.769 "num_base_bdevs_operational": 1, 00:08:23.769 
"base_bdevs_list": [ 00:08:23.769 { 00:08:23.769 "name": null, 00:08:23.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.769 "is_configured": false, 00:08:23.769 "data_offset": 0, 00:08:23.769 "data_size": 65536 00:08:23.769 }, 00:08:23.769 { 00:08:23.769 "name": "BaseBdev2", 00:08:23.769 "uuid": "4fd41417-8475-446d-81a5-a06fab345080", 00:08:23.769 "is_configured": true, 00:08:23.769 "data_offset": 0, 00:08:23.769 "data_size": 65536 00:08:23.769 } 00:08:23.769 ] 00:08:23.769 }' 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.769 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.339 [2024-11-21 03:17:11.700913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.339 [2024-11-21 03:17:11.701092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75045 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75045 ']' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75045 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75045 00:08:24.339 killing process with pid 75045 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75045' 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75045 00:08:24.339 [2024-11-21 03:17:11.800315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.339 03:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75045 00:08:24.339 [2024-11-21 03:17:11.801416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:24.598 00:08:24.598 real 0m3.919s 00:08:24.598 user 0m6.174s 00:08:24.598 sys 0m0.797s 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.598 ************************************ 00:08:24.598 END TEST raid_state_function_test 00:08:24.598 ************************************ 00:08:24.598 03:17:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:24.598 03:17:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.598 03:17:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.598 03:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.598 ************************************ 00:08:24.598 START TEST 
raid_state_function_test_sb 00:08:24.598 ************************************ 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:24.598 
03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75287 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75287' 00:08:24.598 Process raid pid: 75287 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75287 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75287 ']' 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.598 03:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.858 [2024-11-21 03:17:12.204545] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:24.858 [2024-11-21 03:17:12.204801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.858 [2024-11-21 03:17:12.351324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.858 [2024-11-21 03:17:12.388842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.858 [2024-11-21 03:17:12.419665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.148 [2024-11-21 03:17:12.463096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.148 [2024-11-21 03:17:12.463137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.718 [2024-11-21 03:17:13.058570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev1 00:08:25.718 [2024-11-21 03:17:13.058681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.718 [2024-11-21 03:17:13.058714] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.718 [2024-11-21 03:17:13.058736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.718 03:17:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.718 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.718 "name": "Existed_Raid", 00:08:25.718 "uuid": "1e667b31-49bb-4ec9-85b2-177f01422b46", 00:08:25.718 "strip_size_kb": 64, 00:08:25.718 "state": "configuring", 00:08:25.718 "raid_level": "concat", 00:08:25.718 "superblock": true, 00:08:25.719 "num_base_bdevs": 2, 00:08:25.719 "num_base_bdevs_discovered": 0, 00:08:25.719 "num_base_bdevs_operational": 2, 00:08:25.719 "base_bdevs_list": [ 00:08:25.719 { 00:08:25.719 "name": "BaseBdev1", 00:08:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.719 "is_configured": false, 00:08:25.719 "data_offset": 0, 00:08:25.719 "data_size": 0 00:08:25.719 }, 00:08:25.719 { 00:08:25.719 "name": "BaseBdev2", 00:08:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.719 "is_configured": false, 00:08:25.719 "data_offset": 0, 00:08:25.719 "data_size": 0 00:08:25.719 } 00:08:25.719 ] 00:08:25.719 }' 00:08:25.719 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.719 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 [2024-11-21 03:17:13.498628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.979 
[2024-11-21 03:17:13.498761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 [2024-11-21 03:17:13.510691] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.979 [2024-11-21 03:17:13.510828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.979 [2024-11-21 03:17:13.510866] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.979 [2024-11-21 03:17:13.510891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.979 [2024-11-21 03:17:13.532238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.979 BaseBdev1 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.979 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.240 [ 00:08:26.240 { 00:08:26.240 "name": "BaseBdev1", 00:08:26.240 "aliases": [ 00:08:26.240 "0a39a8a2-e24b-4d5d-a7a7-4afcdf7efe3b" 00:08:26.240 ], 00:08:26.240 "product_name": "Malloc disk", 00:08:26.240 "block_size": 512, 00:08:26.240 "num_blocks": 65536, 00:08:26.240 "uuid": "0a39a8a2-e24b-4d5d-a7a7-4afcdf7efe3b", 00:08:26.240 "assigned_rate_limits": { 00:08:26.240 "rw_ios_per_sec": 0, 00:08:26.240 "rw_mbytes_per_sec": 0, 00:08:26.240 "r_mbytes_per_sec": 0, 00:08:26.240 "w_mbytes_per_sec": 0 00:08:26.240 }, 00:08:26.240 "claimed": true, 00:08:26.240 "claim_type": 
"exclusive_write", 00:08:26.240 "zoned": false, 00:08:26.240 "supported_io_types": { 00:08:26.240 "read": true, 00:08:26.240 "write": true, 00:08:26.240 "unmap": true, 00:08:26.240 "flush": true, 00:08:26.240 "reset": true, 00:08:26.240 "nvme_admin": false, 00:08:26.240 "nvme_io": false, 00:08:26.240 "nvme_io_md": false, 00:08:26.240 "write_zeroes": true, 00:08:26.240 "zcopy": true, 00:08:26.240 "get_zone_info": false, 00:08:26.240 "zone_management": false, 00:08:26.240 "zone_append": false, 00:08:26.240 "compare": false, 00:08:26.240 "compare_and_write": false, 00:08:26.240 "abort": true, 00:08:26.240 "seek_hole": false, 00:08:26.240 "seek_data": false, 00:08:26.240 "copy": true, 00:08:26.240 "nvme_iov_md": false 00:08:26.240 }, 00:08:26.240 "memory_domains": [ 00:08:26.240 { 00:08:26.240 "dma_device_id": "system", 00:08:26.240 "dma_device_type": 1 00:08:26.240 }, 00:08:26.240 { 00:08:26.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.240 "dma_device_type": 2 00:08:26.240 } 00:08:26.240 ], 00:08:26.240 "driver_specific": {} 00:08:26.240 } 00:08:26.240 ] 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.240 "name": "Existed_Raid", 00:08:26.240 "uuid": "37f6e6d3-cd25-4952-bab7-320425c2e099", 00:08:26.240 "strip_size_kb": 64, 00:08:26.240 "state": "configuring", 00:08:26.240 "raid_level": "concat", 00:08:26.240 "superblock": true, 00:08:26.240 "num_base_bdevs": 2, 00:08:26.240 "num_base_bdevs_discovered": 1, 00:08:26.240 "num_base_bdevs_operational": 2, 00:08:26.240 "base_bdevs_list": [ 00:08:26.240 { 00:08:26.240 "name": "BaseBdev1", 00:08:26.240 "uuid": "0a39a8a2-e24b-4d5d-a7a7-4afcdf7efe3b", 00:08:26.240 "is_configured": true, 00:08:26.240 "data_offset": 2048, 00:08:26.240 "data_size": 63488 00:08:26.240 }, 00:08:26.240 { 00:08:26.240 "name": "BaseBdev2", 00:08:26.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.240 "is_configured": false, 00:08:26.240 
"data_offset": 0, 00:08:26.240 "data_size": 0 00:08:26.240 } 00:08:26.240 ] 00:08:26.240 }' 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.240 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.500 [2024-11-21 03:17:13.984427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.500 [2024-11-21 03:17:13.984556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.500 [2024-11-21 03:17:13.996484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.500 [2024-11-21 03:17:13.998652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.500 [2024-11-21 03:17:13.998746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.500 03:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.500 03:17:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.500 "name": "Existed_Raid", 00:08:26.500 "uuid": "40c254cd-20f9-4d42-9d1d-79fb832fe078", 00:08:26.500 "strip_size_kb": 64, 00:08:26.500 "state": "configuring", 00:08:26.500 "raid_level": "concat", 00:08:26.500 "superblock": true, 00:08:26.500 "num_base_bdevs": 2, 00:08:26.500 "num_base_bdevs_discovered": 1, 00:08:26.500 "num_base_bdevs_operational": 2, 00:08:26.500 "base_bdevs_list": [ 00:08:26.500 { 00:08:26.500 "name": "BaseBdev1", 00:08:26.500 "uuid": "0a39a8a2-e24b-4d5d-a7a7-4afcdf7efe3b", 00:08:26.500 "is_configured": true, 00:08:26.500 "data_offset": 2048, 00:08:26.500 "data_size": 63488 00:08:26.500 }, 00:08:26.500 { 00:08:26.500 "name": "BaseBdev2", 00:08:26.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.500 "is_configured": false, 00:08:26.500 "data_offset": 0, 00:08:26.501 "data_size": 0 00:08:26.501 } 00:08:26.501 ] 00:08:26.501 }' 00:08:26.501 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.501 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.071 [2024-11-21 03:17:14.451840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.071 BaseBdev2 00:08:27.071 [2024-11-21 03:17:14.452201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:27.071 [2024-11-21 03:17:14.452225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:27.071 [2024-11-21 03:17:14.452551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006150 00:08:27.071 [2024-11-21 03:17:14.452726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:27.071 [2024-11-21 03:17:14.452739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:27.071 [2024-11-21 03:17:14.452876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.071 03:17:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.071 [ 00:08:27.071 { 00:08:27.071 "name": "BaseBdev2", 00:08:27.071 "aliases": [ 00:08:27.071 "ea5842b6-520d-458c-b212-e64823692cc1" 00:08:27.071 ], 00:08:27.071 "product_name": "Malloc disk", 00:08:27.071 "block_size": 512, 00:08:27.071 "num_blocks": 65536, 00:08:27.071 "uuid": "ea5842b6-520d-458c-b212-e64823692cc1", 00:08:27.071 "assigned_rate_limits": { 00:08:27.071 "rw_ios_per_sec": 0, 00:08:27.071 "rw_mbytes_per_sec": 0, 00:08:27.071 "r_mbytes_per_sec": 0, 00:08:27.071 "w_mbytes_per_sec": 0 00:08:27.071 }, 00:08:27.071 "claimed": true, 00:08:27.071 "claim_type": "exclusive_write", 00:08:27.071 "zoned": false, 00:08:27.071 "supported_io_types": { 00:08:27.071 "read": true, 00:08:27.071 "write": true, 00:08:27.071 "unmap": true, 00:08:27.071 "flush": true, 00:08:27.071 "reset": true, 00:08:27.071 "nvme_admin": false, 00:08:27.071 "nvme_io": false, 00:08:27.071 "nvme_io_md": false, 00:08:27.071 "write_zeroes": true, 00:08:27.071 "zcopy": true, 00:08:27.071 "get_zone_info": false, 00:08:27.071 "zone_management": false, 00:08:27.071 "zone_append": false, 00:08:27.071 "compare": false, 00:08:27.071 "compare_and_write": false, 00:08:27.071 "abort": true, 00:08:27.071 "seek_hole": false, 00:08:27.071 "seek_data": false, 00:08:27.072 "copy": true, 00:08:27.072 "nvme_iov_md": false 00:08:27.072 }, 00:08:27.072 "memory_domains": [ 00:08:27.072 { 00:08:27.072 "dma_device_id": "system", 00:08:27.072 "dma_device_type": 1 00:08:27.072 }, 00:08:27.072 { 00:08:27.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.072 "dma_device_type": 2 00:08:27.072 } 00:08:27.072 ], 00:08:27.072 "driver_specific": {} 00:08:27.072 } 00:08:27.072 ] 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.072 "name": "Existed_Raid", 00:08:27.072 "uuid": "40c254cd-20f9-4d42-9d1d-79fb832fe078", 00:08:27.072 "strip_size_kb": 64, 00:08:27.072 "state": "online", 00:08:27.072 "raid_level": "concat", 00:08:27.072 "superblock": true, 00:08:27.072 "num_base_bdevs": 2, 00:08:27.072 "num_base_bdevs_discovered": 2, 00:08:27.072 "num_base_bdevs_operational": 2, 00:08:27.072 "base_bdevs_list": [ 00:08:27.072 { 00:08:27.072 "name": "BaseBdev1", 00:08:27.072 "uuid": "0a39a8a2-e24b-4d5d-a7a7-4afcdf7efe3b", 00:08:27.072 "is_configured": true, 00:08:27.072 "data_offset": 2048, 00:08:27.072 "data_size": 63488 00:08:27.072 }, 00:08:27.072 { 00:08:27.072 "name": "BaseBdev2", 00:08:27.072 "uuid": "ea5842b6-520d-458c-b212-e64823692cc1", 00:08:27.072 "is_configured": true, 00:08:27.072 "data_offset": 2048, 00:08:27.072 "data_size": 63488 00:08:27.072 } 00:08:27.072 ] 00:08:27.072 }' 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.072 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.332 03:17:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.332 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.332 [2024-11-21 03:17:14.892487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.592 03:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.592 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.592 "name": "Existed_Raid", 00:08:27.592 "aliases": [ 00:08:27.593 "40c254cd-20f9-4d42-9d1d-79fb832fe078" 00:08:27.593 ], 00:08:27.593 "product_name": "Raid Volume", 00:08:27.593 "block_size": 512, 00:08:27.593 "num_blocks": 126976, 00:08:27.593 "uuid": "40c254cd-20f9-4d42-9d1d-79fb832fe078", 00:08:27.593 "assigned_rate_limits": { 00:08:27.593 "rw_ios_per_sec": 0, 00:08:27.593 "rw_mbytes_per_sec": 0, 00:08:27.593 "r_mbytes_per_sec": 0, 00:08:27.593 "w_mbytes_per_sec": 0 00:08:27.593 }, 00:08:27.593 "claimed": false, 00:08:27.593 "zoned": false, 00:08:27.593 "supported_io_types": { 00:08:27.593 "read": true, 00:08:27.593 "write": true, 00:08:27.593 "unmap": true, 00:08:27.593 "flush": true, 00:08:27.593 "reset": true, 00:08:27.593 "nvme_admin": false, 00:08:27.593 "nvme_io": false, 00:08:27.593 "nvme_io_md": false, 00:08:27.593 "write_zeroes": true, 00:08:27.593 "zcopy": false, 00:08:27.593 "get_zone_info": false, 00:08:27.593 "zone_management": false, 00:08:27.593 "zone_append": false, 00:08:27.593 "compare": false, 00:08:27.593 "compare_and_write": false, 00:08:27.593 "abort": false, 00:08:27.593 "seek_hole": false, 00:08:27.593 "seek_data": false, 00:08:27.593 "copy": false, 00:08:27.593 "nvme_iov_md": false 00:08:27.593 }, 00:08:27.593 "memory_domains": [ 00:08:27.593 { 00:08:27.593 "dma_device_id": "system", 00:08:27.593 
"dma_device_type": 1 00:08:27.593 }, 00:08:27.593 { 00:08:27.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.593 "dma_device_type": 2 00:08:27.593 }, 00:08:27.593 { 00:08:27.593 "dma_device_id": "system", 00:08:27.593 "dma_device_type": 1 00:08:27.593 }, 00:08:27.593 { 00:08:27.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.593 "dma_device_type": 2 00:08:27.593 } 00:08:27.593 ], 00:08:27.593 "driver_specific": { 00:08:27.593 "raid": { 00:08:27.593 "uuid": "40c254cd-20f9-4d42-9d1d-79fb832fe078", 00:08:27.593 "strip_size_kb": 64, 00:08:27.593 "state": "online", 00:08:27.593 "raid_level": "concat", 00:08:27.593 "superblock": true, 00:08:27.593 "num_base_bdevs": 2, 00:08:27.593 "num_base_bdevs_discovered": 2, 00:08:27.593 "num_base_bdevs_operational": 2, 00:08:27.593 "base_bdevs_list": [ 00:08:27.593 { 00:08:27.593 "name": "BaseBdev1", 00:08:27.593 "uuid": "0a39a8a2-e24b-4d5d-a7a7-4afcdf7efe3b", 00:08:27.593 "is_configured": true, 00:08:27.593 "data_offset": 2048, 00:08:27.593 "data_size": 63488 00:08:27.593 }, 00:08:27.593 { 00:08:27.593 "name": "BaseBdev2", 00:08:27.593 "uuid": "ea5842b6-520d-458c-b212-e64823692cc1", 00:08:27.593 "is_configured": true, 00:08:27.593 "data_offset": 2048, 00:08:27.593 "data_size": 63488 00:08:27.593 } 00:08:27.593 ] 00:08:27.593 } 00:08:27.593 } 00:08:27.593 }' 00:08:27.593 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.593 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:27.593 BaseBdev2' 00:08:27.593 03:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.593 03:17:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.593 [2024-11-21 03:17:15.112265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.593 [2024-11-21 03:17:15.112389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.593 [2024-11-21 03:17:15.112494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.593 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.854 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.854 "name": "Existed_Raid", 00:08:27.854 "uuid": "40c254cd-20f9-4d42-9d1d-79fb832fe078", 00:08:27.854 "strip_size_kb": 64, 00:08:27.854 "state": "offline", 00:08:27.854 "raid_level": "concat", 00:08:27.854 "superblock": true, 00:08:27.854 "num_base_bdevs": 2, 00:08:27.854 "num_base_bdevs_discovered": 1, 00:08:27.854 "num_base_bdevs_operational": 1, 00:08:27.854 "base_bdevs_list": [ 00:08:27.854 { 00:08:27.854 "name": null, 00:08:27.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.854 "is_configured": false, 00:08:27.854 "data_offset": 0, 00:08:27.854 "data_size": 63488 00:08:27.854 }, 00:08:27.854 { 00:08:27.854 "name": "BaseBdev2", 00:08:27.854 "uuid": "ea5842b6-520d-458c-b212-e64823692cc1", 00:08:27.854 "is_configured": true, 00:08:27.854 "data_offset": 2048, 00:08:27.854 "data_size": 63488 00:08:27.854 } 00:08:27.854 ] 00:08:27.854 }' 00:08:27.854 03:17:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.854 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.113 [2024-11-21 03:17:15.564675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.113 [2024-11-21 03:17:15.564859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75287 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75287 ']' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75287 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75287 00:08:28.113 killing process with pid 75287 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75287' 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75287 00:08:28.113 [2024-11-21 03:17:15.663093] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.113 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75287 00:08:28.113 [2024-11-21 03:17:15.664230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.372 03:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:28.372 00:08:28.372 real 0m3.783s 00:08:28.372 user 0m5.875s 00:08:28.372 sys 0m0.795s 00:08:28.372 ************************************ 00:08:28.372 END TEST raid_state_function_test_sb 00:08:28.372 ************************************ 00:08:28.372 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.372 03:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.372 03:17:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:28.372 03:17:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:28.372 03:17:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.372 03:17:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.631 ************************************ 00:08:28.631 START TEST raid_superblock_test 00:08:28.631 ************************************ 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:28.631 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75523 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75523 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75523 ']' 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.632 03:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.632 [2024-11-21 03:17:16.039845] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:28.632 [2024-11-21 03:17:16.040202] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75523 ] 00:08:28.891 [2024-11-21 03:17:16.202684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:28.892 [2024-11-21 03:17:16.242826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.892 [2024-11-21 03:17:16.273799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.892 [2024-11-21 03:17:16.317931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.892 [2024-11-21 03:17:16.317974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.459 malloc1 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.459 [2024-11-21 03:17:16.982847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:29.459 [2024-11-21 03:17:16.983058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.459 [2024-11-21 03:17:16.983129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:29.459 [2024-11-21 03:17:16.983180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.459 [2024-11-21 03:17:16.985835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.459 [2024-11-21 03:17:16.985967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:29.459 pt1 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.459 03:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.459 malloc2 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.459 [2024-11-21 03:17:17.012473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.459 [2024-11-21 03:17:17.012619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.459 [2024-11-21 03:17:17.012673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:29.459 [2024-11-21 03:17:17.012707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.459 [2024-11-21 03:17:17.015200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.459 [2024-11-21 03:17:17.015300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.459 pt2 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.459 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.750 [2024-11-21 03:17:17.024572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:29.750 [2024-11-21 03:17:17.026880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.750 [2024-11-21 03:17:17.027168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:29.750 [2024-11-21 03:17:17.027226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:29.750 [2024-11-21 03:17:17.027583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:29.750 [2024-11-21 03:17:17.027814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:29.750 [2024-11-21 03:17:17.027869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:29.750 [2024-11-21 03:17:17.028122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.750 03:17:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.750 "name": "raid_bdev1", 00:08:29.750 "uuid": "f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:29.750 "strip_size_kb": 64, 00:08:29.750 "state": "online", 00:08:29.750 "raid_level": "concat", 00:08:29.750 "superblock": true, 00:08:29.750 "num_base_bdevs": 2, 00:08:29.750 "num_base_bdevs_discovered": 2, 00:08:29.750 "num_base_bdevs_operational": 2, 00:08:29.750 "base_bdevs_list": [ 00:08:29.750 { 00:08:29.750 "name": "pt1", 00:08:29.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.750 "is_configured": true, 00:08:29.750 "data_offset": 2048, 00:08:29.750 "data_size": 63488 00:08:29.750 }, 00:08:29.750 { 00:08:29.750 "name": "pt2", 00:08:29.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.750 
"is_configured": true, 00:08:29.750 "data_offset": 2048, 00:08:29.750 "data_size": 63488 00:08:29.750 } 00:08:29.750 ] 00:08:29.750 }' 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.750 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.008 [2024-11-21 03:17:17.460986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.008 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.008 "name": "raid_bdev1", 00:08:30.008 "aliases": [ 00:08:30.009 "f664655f-e785-4bae-8a78-8d6917775aa3" 00:08:30.009 ], 00:08:30.009 "product_name": "Raid Volume", 00:08:30.009 "block_size": 512, 00:08:30.009 "num_blocks": 126976, 00:08:30.009 "uuid": 
"f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:30.009 "assigned_rate_limits": { 00:08:30.009 "rw_ios_per_sec": 0, 00:08:30.009 "rw_mbytes_per_sec": 0, 00:08:30.009 "r_mbytes_per_sec": 0, 00:08:30.009 "w_mbytes_per_sec": 0 00:08:30.009 }, 00:08:30.009 "claimed": false, 00:08:30.009 "zoned": false, 00:08:30.009 "supported_io_types": { 00:08:30.009 "read": true, 00:08:30.009 "write": true, 00:08:30.009 "unmap": true, 00:08:30.009 "flush": true, 00:08:30.009 "reset": true, 00:08:30.009 "nvme_admin": false, 00:08:30.009 "nvme_io": false, 00:08:30.009 "nvme_io_md": false, 00:08:30.009 "write_zeroes": true, 00:08:30.009 "zcopy": false, 00:08:30.009 "get_zone_info": false, 00:08:30.009 "zone_management": false, 00:08:30.009 "zone_append": false, 00:08:30.009 "compare": false, 00:08:30.009 "compare_and_write": false, 00:08:30.009 "abort": false, 00:08:30.009 "seek_hole": false, 00:08:30.009 "seek_data": false, 00:08:30.009 "copy": false, 00:08:30.009 "nvme_iov_md": false 00:08:30.009 }, 00:08:30.009 "memory_domains": [ 00:08:30.009 { 00:08:30.009 "dma_device_id": "system", 00:08:30.009 "dma_device_type": 1 00:08:30.009 }, 00:08:30.009 { 00:08:30.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.009 "dma_device_type": 2 00:08:30.009 }, 00:08:30.009 { 00:08:30.009 "dma_device_id": "system", 00:08:30.009 "dma_device_type": 1 00:08:30.009 }, 00:08:30.009 { 00:08:30.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.009 "dma_device_type": 2 00:08:30.009 } 00:08:30.009 ], 00:08:30.009 "driver_specific": { 00:08:30.009 "raid": { 00:08:30.009 "uuid": "f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:30.009 "strip_size_kb": 64, 00:08:30.009 "state": "online", 00:08:30.009 "raid_level": "concat", 00:08:30.009 "superblock": true, 00:08:30.009 "num_base_bdevs": 2, 00:08:30.009 "num_base_bdevs_discovered": 2, 00:08:30.009 "num_base_bdevs_operational": 2, 00:08:30.009 "base_bdevs_list": [ 00:08:30.009 { 00:08:30.009 "name": "pt1", 00:08:30.009 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:30.009 "is_configured": true, 00:08:30.009 "data_offset": 2048, 00:08:30.009 "data_size": 63488 00:08:30.009 }, 00:08:30.009 { 00:08:30.009 "name": "pt2", 00:08:30.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.009 "is_configured": true, 00:08:30.009 "data_offset": 2048, 00:08:30.009 "data_size": 63488 00:08:30.009 } 00:08:30.009 ] 00:08:30.009 } 00:08:30.009 } 00:08:30.009 }' 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:30.009 pt2' 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.009 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.268 [2024-11-21 03:17:17.677017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f664655f-e785-4bae-8a78-8d6917775aa3 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f664655f-e785-4bae-8a78-8d6917775aa3 ']' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.268 [2024-11-21 03:17:17.724713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.268 [2024-11-21 03:17:17.724815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.268 [2024-11-21 03:17:17.724966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.268 [2024-11-21 03:17:17.725077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.268 [2024-11-21 03:17:17.725126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.268 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.527 [2024-11-21 03:17:17.864815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:30.527 [2024-11-21 03:17:17.867110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:30.527 [2024-11-21 03:17:17.867194] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:30.527 [2024-11-21 03:17:17.867274] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:30.527 [2024-11-21 03:17:17.867292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.527 [2024-11-21 03:17:17.867303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:30.527 request: 00:08:30.527 { 00:08:30.527 "name": "raid_bdev1", 00:08:30.527 "raid_level": "concat", 00:08:30.527 "base_bdevs": [ 00:08:30.527 "malloc1", 00:08:30.527 "malloc2" 00:08:30.527 ], 00:08:30.527 "strip_size_kb": 64, 00:08:30.527 "superblock": false, 00:08:30.527 "method": "bdev_raid_create", 00:08:30.527 "req_id": 1 00:08:30.527 } 00:08:30.527 Got JSON-RPC error response 00:08:30.527 response: 00:08:30.527 { 00:08:30.527 "code": -17, 00:08:30.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:08:30.527 } 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.527 [2024-11-21 03:17:17.932793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.527 [2024-11-21 03:17:17.932948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.527 [2024-11-21 03:17:17.932989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:30.527 
[2024-11-21 03:17:17.933048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.527 [2024-11-21 03:17:17.935533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.527 [2024-11-21 03:17:17.935646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.527 [2024-11-21 03:17:17.935798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:30.527 [2024-11-21 03:17:17.935888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:30.527 pt1 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.527 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.527 "name": "raid_bdev1", 00:08:30.527 "uuid": "f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:30.527 "strip_size_kb": 64, 00:08:30.527 "state": "configuring", 00:08:30.527 "raid_level": "concat", 00:08:30.527 "superblock": true, 00:08:30.528 "num_base_bdevs": 2, 00:08:30.528 "num_base_bdevs_discovered": 1, 00:08:30.528 "num_base_bdevs_operational": 2, 00:08:30.528 "base_bdevs_list": [ 00:08:30.528 { 00:08:30.528 "name": "pt1", 00:08:30.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.528 "is_configured": true, 00:08:30.528 "data_offset": 2048, 00:08:30.528 "data_size": 63488 00:08:30.528 }, 00:08:30.528 { 00:08:30.528 "name": null, 00:08:30.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.528 "is_configured": false, 00:08:30.528 "data_offset": 2048, 00:08:30.528 "data_size": 63488 00:08:30.528 } 00:08:30.528 ] 00:08:30.528 }' 00:08:30.528 03:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.528 03:17:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.101 [2024-11-21 03:17:18.372920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.101 [2024-11-21 03:17:18.373095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.101 [2024-11-21 03:17:18.373143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:31.101 [2024-11-21 03:17:18.373180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.101 [2024-11-21 03:17:18.373658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.101 [2024-11-21 03:17:18.373731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.101 [2024-11-21 03:17:18.373851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:31.101 [2024-11-21 03:17:18.373908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.101 [2024-11-21 03:17:18.374049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:31.101 [2024-11-21 03:17:18.374094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:31.101 [2024-11-21 03:17:18.374376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:31.101 [2024-11-21 03:17:18.374547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:31.101 [2024-11-21 03:17:18.374591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:31.101 [2024-11-21 03:17:18.374755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.101 
pt2 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.101 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.101 03:17:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.101 "name": "raid_bdev1", 00:08:31.101 "uuid": "f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:31.101 "strip_size_kb": 64, 00:08:31.101 "state": "online", 00:08:31.101 "raid_level": "concat", 00:08:31.101 "superblock": true, 00:08:31.101 "num_base_bdevs": 2, 00:08:31.101 "num_base_bdevs_discovered": 2, 00:08:31.101 "num_base_bdevs_operational": 2, 00:08:31.101 "base_bdevs_list": [ 00:08:31.101 { 00:08:31.101 "name": "pt1", 00:08:31.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.101 "is_configured": true, 00:08:31.101 "data_offset": 2048, 00:08:31.101 "data_size": 63488 00:08:31.101 }, 00:08:31.101 { 00:08:31.101 "name": "pt2", 00:08:31.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.101 "is_configured": true, 00:08:31.101 "data_offset": 2048, 00:08:31.101 "data_size": 63488 00:08:31.102 } 00:08:31.102 ] 00:08:31.102 }' 00:08:31.102 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.102 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.361 [2024-11-21 03:17:18.841375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.361 "name": "raid_bdev1", 00:08:31.361 "aliases": [ 00:08:31.361 "f664655f-e785-4bae-8a78-8d6917775aa3" 00:08:31.361 ], 00:08:31.361 "product_name": "Raid Volume", 00:08:31.361 "block_size": 512, 00:08:31.361 "num_blocks": 126976, 00:08:31.361 "uuid": "f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:31.361 "assigned_rate_limits": { 00:08:31.361 "rw_ios_per_sec": 0, 00:08:31.361 "rw_mbytes_per_sec": 0, 00:08:31.361 "r_mbytes_per_sec": 0, 00:08:31.361 "w_mbytes_per_sec": 0 00:08:31.361 }, 00:08:31.361 "claimed": false, 00:08:31.361 "zoned": false, 00:08:31.361 "supported_io_types": { 00:08:31.361 "read": true, 00:08:31.361 "write": true, 00:08:31.361 "unmap": true, 00:08:31.361 "flush": true, 00:08:31.361 "reset": true, 00:08:31.361 "nvme_admin": false, 00:08:31.361 "nvme_io": false, 00:08:31.361 "nvme_io_md": false, 00:08:31.361 "write_zeroes": true, 00:08:31.361 "zcopy": false, 00:08:31.361 "get_zone_info": false, 00:08:31.361 "zone_management": false, 00:08:31.361 "zone_append": false, 00:08:31.361 "compare": false, 00:08:31.361 "compare_and_write": false, 00:08:31.361 "abort": false, 00:08:31.361 "seek_hole": false, 00:08:31.361 "seek_data": false, 00:08:31.361 "copy": false, 00:08:31.361 "nvme_iov_md": false 00:08:31.361 }, 00:08:31.361 "memory_domains": [ 00:08:31.361 { 00:08:31.361 "dma_device_id": "system", 00:08:31.361 "dma_device_type": 1 00:08:31.361 }, 00:08:31.361 { 00:08:31.361 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:31.361 "dma_device_type": 2 00:08:31.361 }, 00:08:31.361 { 00:08:31.361 "dma_device_id": "system", 00:08:31.361 "dma_device_type": 1 00:08:31.361 }, 00:08:31.361 { 00:08:31.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.361 "dma_device_type": 2 00:08:31.361 } 00:08:31.361 ], 00:08:31.361 "driver_specific": { 00:08:31.361 "raid": { 00:08:31.361 "uuid": "f664655f-e785-4bae-8a78-8d6917775aa3", 00:08:31.361 "strip_size_kb": 64, 00:08:31.361 "state": "online", 00:08:31.361 "raid_level": "concat", 00:08:31.361 "superblock": true, 00:08:31.361 "num_base_bdevs": 2, 00:08:31.361 "num_base_bdevs_discovered": 2, 00:08:31.361 "num_base_bdevs_operational": 2, 00:08:31.361 "base_bdevs_list": [ 00:08:31.361 { 00:08:31.361 "name": "pt1", 00:08:31.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.361 "is_configured": true, 00:08:31.361 "data_offset": 2048, 00:08:31.361 "data_size": 63488 00:08:31.361 }, 00:08:31.361 { 00:08:31.361 "name": "pt2", 00:08:31.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.361 "is_configured": true, 00:08:31.361 "data_offset": 2048, 00:08:31.361 "data_size": 63488 00:08:31.361 } 00:08:31.361 ] 00:08:31.361 } 00:08:31.361 } 00:08:31.361 }' 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:31.361 pt2' 00:08:31.361 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt1 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.621 03:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.621 [2024-11-21 03:17:19.065443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f664655f-e785-4bae-8a78-8d6917775aa3 '!=' f664655f-e785-4bae-8a78-8d6917775aa3 ']' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75523 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75523 ']' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75523 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75523 00:08:31.621 killing process with pid 75523 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75523' 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75523 00:08:31.621 [2024-11-21 03:17:19.153411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:31.621 [2024-11-21 03:17:19.153528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.621 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75523 00:08:31.621 [2024-11-21 03:17:19.153585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.621 [2024-11-21 03:17:19.153598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:31.621 [2024-11-21 03:17:19.177777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.880 03:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:31.880 00:08:31.880 real 0m3.459s 00:08:31.880 user 0m5.332s 00:08:31.880 sys 0m0.803s 00:08:31.880 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.880 03:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.880 ************************************ 00:08:31.880 END TEST raid_superblock_test 00:08:31.880 ************************************ 00:08:32.138 03:17:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:32.138 03:17:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:32.138 03:17:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.138 03:17:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.138 ************************************ 00:08:32.138 START TEST raid_read_error_test 00:08:32.138 ************************************ 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:32.138 03:17:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O3zisx2aFv 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75723 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75723 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75723 ']' 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.138 03:17:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.138 [2024-11-21 03:17:19.580186] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:32.138 [2024-11-21 03:17:19.580432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75723 ] 00:08:32.397 [2024-11-21 03:17:19.718475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:32.397 [2024-11-21 03:17:19.755635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.398 [2024-11-21 03:17:19.786432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.398 [2024-11-21 03:17:19.830779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.398 [2024-11-21 03:17:19.830827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 BaseBdev1_malloc 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 true 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 [2024-11-21 03:17:20.464368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.967 [2024-11-21 03:17:20.464457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.967 [2024-11-21 03:17:20.464482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.967 [2024-11-21 03:17:20.464498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.967 [2024-11-21 03:17:20.467102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.967 [2024-11-21 03:17:20.467153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.967 BaseBdev1 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 BaseBdev2_malloc 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 true 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 [2024-11-21 03:17:20.505732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.967 [2024-11-21 03:17:20.505816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.967 [2024-11-21 03:17:20.505839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.967 [2024-11-21 03:17:20.505850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.967 [2024-11-21 03:17:20.508357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.967 [2024-11-21 03:17:20.508435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.967 BaseBdev2 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.967 [2024-11-21 03:17:20.517761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.967 [2024-11-21 03:17:20.520079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.967 [2024-11-21 03:17:20.520304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:08:32.967 [2024-11-21 03:17:20.520321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:32.967 [2024-11-21 03:17:20.520633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:32.967 [2024-11-21 03:17:20.520808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:32.967 [2024-11-21 03:17:20.520825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:32.967 [2024-11-21 03:17:20.520997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.967 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.228 "name": "raid_bdev1", 00:08:33.228 "uuid": "4684d00e-9280-4d31-83f8-077d7e04d7b2", 00:08:33.228 "strip_size_kb": 64, 00:08:33.228 "state": "online", 00:08:33.228 "raid_level": "concat", 00:08:33.228 "superblock": true, 00:08:33.228 "num_base_bdevs": 2, 00:08:33.228 "num_base_bdevs_discovered": 2, 00:08:33.228 "num_base_bdevs_operational": 2, 00:08:33.228 "base_bdevs_list": [ 00:08:33.228 { 00:08:33.228 "name": "BaseBdev1", 00:08:33.228 "uuid": "ae4d48ed-9aa0-54c7-9940-a84821eefc28", 00:08:33.228 "is_configured": true, 00:08:33.228 "data_offset": 2048, 00:08:33.228 "data_size": 63488 00:08:33.228 }, 00:08:33.228 { 00:08:33.228 "name": "BaseBdev2", 00:08:33.228 "uuid": "949a246e-6741-5edd-a8d2-1829b438b1f0", 00:08:33.228 "is_configured": true, 00:08:33.228 "data_offset": 2048, 00:08:33.228 "data_size": 63488 00:08:33.228 } 00:08:33.228 ] 00:08:33.228 }' 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.228 03:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.487 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:33.487 03:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.487 [2024-11-21 03:17:21.026295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:34.429 
03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.429 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.430 03:17:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.690 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.690 "name": "raid_bdev1", 00:08:34.690 "uuid": "4684d00e-9280-4d31-83f8-077d7e04d7b2", 00:08:34.690 "strip_size_kb": 64, 00:08:34.690 "state": "online", 00:08:34.690 "raid_level": "concat", 00:08:34.690 "superblock": true, 00:08:34.690 "num_base_bdevs": 2, 00:08:34.690 "num_base_bdevs_discovered": 2, 00:08:34.690 "num_base_bdevs_operational": 2, 00:08:34.690 "base_bdevs_list": [ 00:08:34.690 { 00:08:34.690 "name": "BaseBdev1", 00:08:34.690 "uuid": "ae4d48ed-9aa0-54c7-9940-a84821eefc28", 00:08:34.690 "is_configured": true, 00:08:34.690 "data_offset": 2048, 00:08:34.690 "data_size": 63488 00:08:34.690 }, 00:08:34.690 { 00:08:34.690 "name": "BaseBdev2", 00:08:34.690 "uuid": "949a246e-6741-5edd-a8d2-1829b438b1f0", 00:08:34.690 "is_configured": true, 00:08:34.690 "data_offset": 2048, 00:08:34.690 "data_size": 63488 00:08:34.690 } 00:08:34.690 ] 00:08:34.690 }' 00:08:34.690 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.690 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.950 [2024-11-21 03:17:22.377484] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.950 [2024-11-21 03:17:22.377620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.950 [2024-11-21 03:17:22.380506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.950 [2024-11-21 03:17:22.380626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.950 [2024-11-21 03:17:22.380683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.950 [2024-11-21 03:17:22.380745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.950 { 00:08:34.950 "results": [ 00:08:34.950 { 00:08:34.950 "job": "raid_bdev1", 00:08:34.950 "core_mask": "0x1", 00:08:34.950 "workload": "randrw", 00:08:34.950 "percentage": 50, 00:08:34.950 "status": "finished", 00:08:34.950 "queue_depth": 1, 00:08:34.950 "io_size": 131072, 00:08:34.950 "runtime": 1.349191, 00:08:34.950 "iops": 15030.488640970774, 00:08:34.950 "mibps": 1878.8110801213468, 00:08:34.950 "io_failed": 1, 00:08:34.950 "io_timeout": 0, 00:08:34.950 "avg_latency_us": 92.31590024155847, 00:08:34.950 "min_latency_us": 27.89157151573259, 00:08:34.950 "max_latency_us": 1635.1154885383073 00:08:34.950 } 00:08:34.950 ], 00:08:34.950 "core_count": 1 00:08:34.950 } 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75723 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75723 ']' 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75723 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75723 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75723' 00:08:34.950 killing process with pid 75723 00:08:34.950 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75723 00:08:34.950 [2024-11-21 03:17:22.429963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.951 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75723 00:08:34.951 [2024-11-21 03:17:22.446502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O3zisx2aFv 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:35.211 ************************************ 00:08:35.211 END TEST raid_read_error_test 00:08:35.211 ************************************ 00:08:35.211 00:08:35.211 real 0m3.193s 
00:08:35.211 user 0m4.016s 00:08:35.211 sys 0m0.557s 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.211 03:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.211 03:17:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:35.211 03:17:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.211 03:17:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.211 03:17:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.211 ************************************ 00:08:35.211 START TEST raid_write_error_test 00:08:35.211 ************************************ 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.my9pXribL8 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75852 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75852 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75852 ']' 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.211 
03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.211 03:17:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.472 [2024-11-21 03:17:22.847377] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:35.472 [2024-11-21 03:17:22.847607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75852 ] 00:08:35.472 [2024-11-21 03:17:22.987295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:35.472 [2024-11-21 03:17:23.023048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.732 [2024-11-21 03:17:23.054128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.732 [2024-11-21 03:17:23.098141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.732 [2024-11-21 03:17:23.098183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 BaseBdev1_malloc 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 true 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 [2024-11-21 03:17:23.742383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.303 [2024-11-21 03:17:23.742464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.303 [2024-11-21 03:17:23.742488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.303 [2024-11-21 03:17:23.742502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.303 [2024-11-21 03:17:23.744894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.303 [2024-11-21 03:17:23.744946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.303 BaseBdev1 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 BaseBdev2_malloc 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 true 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 [2024-11-21 03:17:23.783500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.303 [2024-11-21 03:17:23.783573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.303 [2024-11-21 03:17:23.783591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.303 [2024-11-21 03:17:23.783601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.303 [2024-11-21 03:17:23.786014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.303 [2024-11-21 03:17:23.786073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.303 BaseBdev2 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 [2024-11-21 03:17:23.795535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.303 [2024-11-21 03:17:23.797655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.303 [2024-11-21 03:17:23.797930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:36.303 
[2024-11-21 03:17:23.797969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:36.303 [2024-11-21 03:17:23.798254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:36.303 [2024-11-21 03:17:23.798424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:36.303 [2024-11-21 03:17:23.798436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:36.303 [2024-11-21 03:17:23.798602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.303 03:17:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.303 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.303 "name": "raid_bdev1", 00:08:36.303 "uuid": "81bb37b8-6e16-4de6-a11d-c89a9ceec28f", 00:08:36.303 "strip_size_kb": 64, 00:08:36.303 "state": "online", 00:08:36.303 "raid_level": "concat", 00:08:36.303 "superblock": true, 00:08:36.303 "num_base_bdevs": 2, 00:08:36.303 "num_base_bdevs_discovered": 2, 00:08:36.303 "num_base_bdevs_operational": 2, 00:08:36.303 "base_bdevs_list": [ 00:08:36.303 { 00:08:36.303 "name": "BaseBdev1", 00:08:36.303 "uuid": "b8ce5be3-0961-5c75-bb4d-6b2b6411c623", 00:08:36.303 "is_configured": true, 00:08:36.303 "data_offset": 2048, 00:08:36.303 "data_size": 63488 00:08:36.303 }, 00:08:36.303 { 00:08:36.303 "name": "BaseBdev2", 00:08:36.303 "uuid": "df4e59b2-8c65-532e-9160-6ba475126c6e", 00:08:36.303 "is_configured": true, 00:08:36.304 "data_offset": 2048, 00:08:36.304 "data_size": 63488 00:08:36.304 } 00:08:36.304 ] 00:08:36.304 }' 00:08:36.304 03:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.304 03:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.873 03:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.873 03:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.873 [2024-11-21 03:17:24.360135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:37.813 03:17:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.813 "name": "raid_bdev1", 00:08:37.813 "uuid": "81bb37b8-6e16-4de6-a11d-c89a9ceec28f", 00:08:37.813 "strip_size_kb": 64, 00:08:37.813 "state": "online", 00:08:37.813 "raid_level": "concat", 00:08:37.813 "superblock": true, 00:08:37.813 "num_base_bdevs": 2, 00:08:37.813 "num_base_bdevs_discovered": 2, 00:08:37.813 "num_base_bdevs_operational": 2, 00:08:37.813 "base_bdevs_list": [ 00:08:37.813 { 00:08:37.813 "name": "BaseBdev1", 00:08:37.813 "uuid": "b8ce5be3-0961-5c75-bb4d-6b2b6411c623", 00:08:37.813 "is_configured": true, 00:08:37.813 "data_offset": 2048, 00:08:37.813 "data_size": 63488 00:08:37.813 }, 00:08:37.813 { 00:08:37.813 "name": "BaseBdev2", 00:08:37.813 "uuid": "df4e59b2-8c65-532e-9160-6ba475126c6e", 00:08:37.813 "is_configured": true, 00:08:37.813 "data_offset": 2048, 00:08:37.813 "data_size": 63488 00:08:37.813 } 00:08:37.813 ] 00:08:37.813 }' 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.813 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.381 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.381 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.381 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.381 [2024-11-21 03:17:25.678821] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.381 [2024-11-21 03:17:25.678879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.381 [2024-11-21 03:17:25.681770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.381 [2024-11-21 03:17:25.681838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.381 [2024-11-21 03:17:25.681873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.381 [2024-11-21 03:17:25.681914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:38.381 { 00:08:38.381 "results": [ 00:08:38.381 { 00:08:38.381 "job": "raid_bdev1", 00:08:38.381 "core_mask": "0x1", 00:08:38.381 "workload": "randrw", 00:08:38.381 "percentage": 50, 00:08:38.381 "status": "finished", 00:08:38.381 "queue_depth": 1, 00:08:38.381 "io_size": 131072, 00:08:38.381 "runtime": 1.316503, 00:08:38.381 "iops": 14973.00044132068, 00:08:38.381 "mibps": 1871.625055165085, 00:08:38.381 "io_failed": 1, 00:08:38.381 "io_timeout": 0, 00:08:38.381 "avg_latency_us": 92.63193151365607, 00:08:38.381 "min_latency_us": 28.00313780179552, 00:08:38.381 "max_latency_us": 1528.011853917894 00:08:38.381 } 00:08:38.381 ], 00:08:38.381 "core_count": 1 00:08:38.381 } 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75852 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75852 ']' 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75852 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:38.382 03:17:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75852 00:08:38.382 killing process with pid 75852 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75852' 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75852 00:08:38.382 [2024-11-21 03:17:25.723701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.382 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75852 00:08:38.382 [2024-11-21 03:17:25.740331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.my9pXribL8 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.641 ************************************ 00:08:38.641 END TEST raid_write_error_test 00:08:38.641 ************************************ 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != 
\0\.\0\0 ]] 00:08:38.641 00:08:38.641 real 0m3.223s 00:08:38.641 user 0m4.094s 00:08:38.641 sys 0m0.545s 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.641 03:17:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.641 03:17:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:38.641 03:17:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:38.641 03:17:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:38.641 03:17:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.641 03:17:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.641 ************************************ 00:08:38.641 START TEST raid_state_function_test 00:08:38.641 ************************************ 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75979 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75979' 00:08:38.641 Process raid pid: 75979 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75979 00:08:38.641 
03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75979 ']' 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.641 03:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.641 [2024-11-21 03:17:26.143371] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:38.641 [2024-11-21 03:17:26.143590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.900 [2024-11-21 03:17:26.287163] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:38.900 [2024-11-21 03:17:26.326929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.900 [2024-11-21 03:17:26.357972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.900 [2024-11-21 03:17:26.401716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.900 [2024-11-21 03:17:26.401836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.468 [2024-11-21 03:17:27.021034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.468 [2024-11-21 03:17:27.021110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.468 [2024-11-21 03:17:27.021125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.468 [2024-11-21 03:17:27.021134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.468 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.469 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.469 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.469 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.469 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.469 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.469 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.728 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.728 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.728 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.728 "name": "Existed_Raid", 00:08:39.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.728 "strip_size_kb": 0, 00:08:39.728 "state": "configuring", 00:08:39.728 "raid_level": "raid1", 00:08:39.728 "superblock": false, 00:08:39.728 "num_base_bdevs": 2, 00:08:39.728 "num_base_bdevs_discovered": 0, 00:08:39.728 "num_base_bdevs_operational": 2, 00:08:39.728 "base_bdevs_list": [ 00:08:39.728 { 00:08:39.728 "name": "BaseBdev1", 00:08:39.728 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:39.728 "is_configured": false, 00:08:39.728 "data_offset": 0, 00:08:39.728 "data_size": 0 00:08:39.728 }, 00:08:39.728 { 00:08:39.728 "name": "BaseBdev2", 00:08:39.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.728 "is_configured": false, 00:08:39.728 "data_offset": 0, 00:08:39.728 "data_size": 0 00:08:39.728 } 00:08:39.728 ] 00:08:39.728 }' 00:08:39.728 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.728 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 [2024-11-21 03:17:27.493057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.988 [2024-11-21 03:17:27.493206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 [2024-11-21 03:17:27.505108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.988 [2024-11-21 03:17:27.505260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.988 [2024-11-21 
03:17:27.505299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.988 [2024-11-21 03:17:27.505329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 [2024-11-21 03:17:27.526449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.988 BaseBdev1 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.988 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.989 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.989 03:17:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.989 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.989 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.989 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 [ 00:08:40.248 { 00:08:40.248 "name": "BaseBdev1", 00:08:40.248 "aliases": [ 00:08:40.248 "339a4d34-c694-4a9a-929d-a243e3000752" 00:08:40.248 ], 00:08:40.248 "product_name": "Malloc disk", 00:08:40.248 "block_size": 512, 00:08:40.248 "num_blocks": 65536, 00:08:40.248 "uuid": "339a4d34-c694-4a9a-929d-a243e3000752", 00:08:40.248 "assigned_rate_limits": { 00:08:40.248 "rw_ios_per_sec": 0, 00:08:40.248 "rw_mbytes_per_sec": 0, 00:08:40.248 "r_mbytes_per_sec": 0, 00:08:40.248 "w_mbytes_per_sec": 0 00:08:40.248 }, 00:08:40.248 "claimed": true, 00:08:40.248 "claim_type": "exclusive_write", 00:08:40.248 "zoned": false, 00:08:40.248 "supported_io_types": { 00:08:40.248 "read": true, 00:08:40.248 "write": true, 00:08:40.248 "unmap": true, 00:08:40.248 "flush": true, 00:08:40.248 "reset": true, 00:08:40.248 "nvme_admin": false, 00:08:40.248 "nvme_io": false, 00:08:40.248 "nvme_io_md": false, 00:08:40.248 "write_zeroes": true, 00:08:40.248 "zcopy": true, 00:08:40.248 "get_zone_info": false, 00:08:40.248 "zone_management": false, 00:08:40.248 "zone_append": false, 00:08:40.248 "compare": false, 00:08:40.248 "compare_and_write": false, 00:08:40.248 "abort": true, 00:08:40.248 "seek_hole": false, 00:08:40.248 "seek_data": false, 00:08:40.248 "copy": true, 00:08:40.248 "nvme_iov_md": false 00:08:40.248 }, 00:08:40.248 "memory_domains": [ 00:08:40.248 { 00:08:40.248 "dma_device_id": "system", 00:08:40.248 "dma_device_type": 1 00:08:40.248 }, 00:08:40.248 { 00:08:40.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.248 "dma_device_type": 
2 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "driver_specific": {} 00:08:40.248 } 00:08:40.248 ] 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.248 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.248 "name": "Existed_Raid", 00:08:40.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.248 "strip_size_kb": 0, 00:08:40.248 "state": "configuring", 00:08:40.248 "raid_level": "raid1", 00:08:40.248 "superblock": false, 00:08:40.248 "num_base_bdevs": 2, 00:08:40.248 "num_base_bdevs_discovered": 1, 00:08:40.248 "num_base_bdevs_operational": 2, 00:08:40.248 "base_bdevs_list": [ 00:08:40.248 { 00:08:40.248 "name": "BaseBdev1", 00:08:40.248 "uuid": "339a4d34-c694-4a9a-929d-a243e3000752", 00:08:40.248 "is_configured": true, 00:08:40.248 "data_offset": 0, 00:08:40.248 "data_size": 65536 00:08:40.248 }, 00:08:40.248 { 00:08:40.248 "name": "BaseBdev2", 00:08:40.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.248 "is_configured": false, 00:08:40.248 "data_offset": 0, 00:08:40.248 "data_size": 0 00:08:40.248 } 00:08:40.248 ] 00:08:40.248 }' 00:08:40.249 03:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.249 03:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 [2024-11-21 03:17:28.018669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.508 [2024-11-21 03:17:28.018757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.508 03:17:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 [2024-11-21 03:17:28.030719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.508 [2024-11-21 03:17:28.032911] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.508 [2024-11-21 03:17:28.032970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.508 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.509 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.768 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.768 "name": "Existed_Raid", 00:08:40.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.768 "strip_size_kb": 0, 00:08:40.768 "state": "configuring", 00:08:40.768 "raid_level": "raid1", 00:08:40.768 "superblock": false, 00:08:40.768 "num_base_bdevs": 2, 00:08:40.768 "num_base_bdevs_discovered": 1, 00:08:40.768 "num_base_bdevs_operational": 2, 00:08:40.768 "base_bdevs_list": [ 00:08:40.768 { 00:08:40.768 "name": "BaseBdev1", 00:08:40.768 "uuid": "339a4d34-c694-4a9a-929d-a243e3000752", 00:08:40.768 "is_configured": true, 00:08:40.768 "data_offset": 0, 00:08:40.768 "data_size": 65536 00:08:40.768 }, 00:08:40.768 { 00:08:40.768 "name": "BaseBdev2", 00:08:40.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.768 "is_configured": false, 00:08:40.768 "data_offset": 0, 00:08:40.768 "data_size": 0 00:08:40.768 } 00:08:40.768 ] 00:08:40.768 }' 00:08:40.768 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.768 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.028 
03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.028 [2024-11-21 03:17:28.482078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.028 [2024-11-21 03:17:28.482222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:41.028 [2024-11-21 03:17:28.482253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:41.028 [2024-11-21 03:17:28.482556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:41.028 [2024-11-21 03:17:28.482750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:41.028 [2024-11-21 03:17:28.482803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:41.028 [2024-11-21 03:17:28.483075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.028 BaseBdev2 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.028 [ 00:08:41.028 { 00:08:41.028 "name": "BaseBdev2", 00:08:41.028 "aliases": [ 00:08:41.028 "fbaa66f3-5f46-4602-a7db-b373bfaf7a9a" 00:08:41.028 ], 00:08:41.028 "product_name": "Malloc disk", 00:08:41.028 "block_size": 512, 00:08:41.028 "num_blocks": 65536, 00:08:41.028 "uuid": "fbaa66f3-5f46-4602-a7db-b373bfaf7a9a", 00:08:41.028 "assigned_rate_limits": { 00:08:41.028 "rw_ios_per_sec": 0, 00:08:41.028 "rw_mbytes_per_sec": 0, 00:08:41.028 "r_mbytes_per_sec": 0, 00:08:41.028 "w_mbytes_per_sec": 0 00:08:41.028 }, 00:08:41.028 "claimed": true, 00:08:41.028 "claim_type": "exclusive_write", 00:08:41.028 "zoned": false, 00:08:41.028 "supported_io_types": { 00:08:41.028 "read": true, 00:08:41.028 "write": true, 00:08:41.028 "unmap": true, 00:08:41.028 "flush": true, 00:08:41.028 "reset": true, 00:08:41.028 "nvme_admin": false, 00:08:41.028 "nvme_io": false, 00:08:41.028 "nvme_io_md": false, 00:08:41.028 "write_zeroes": true, 00:08:41.028 "zcopy": true, 00:08:41.028 "get_zone_info": false, 00:08:41.028 "zone_management": false, 00:08:41.028 "zone_append": false, 00:08:41.028 "compare": false, 00:08:41.028 "compare_and_write": false, 
00:08:41.028 "abort": true, 00:08:41.028 "seek_hole": false, 00:08:41.028 "seek_data": false, 00:08:41.028 "copy": true, 00:08:41.028 "nvme_iov_md": false 00:08:41.028 }, 00:08:41.028 "memory_domains": [ 00:08:41.028 { 00:08:41.028 "dma_device_id": "system", 00:08:41.028 "dma_device_type": 1 00:08:41.028 }, 00:08:41.028 { 00:08:41.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.028 "dma_device_type": 2 00:08:41.028 } 00:08:41.028 ], 00:08:41.028 "driver_specific": {} 00:08:41.028 } 00:08:41.028 ] 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.028 
03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.028 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.029 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.029 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.029 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.029 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.029 "name": "Existed_Raid", 00:08:41.029 "uuid": "47d6ab78-828d-41a9-a963-26c9c23a93a6", 00:08:41.029 "strip_size_kb": 0, 00:08:41.029 "state": "online", 00:08:41.029 "raid_level": "raid1", 00:08:41.029 "superblock": false, 00:08:41.029 "num_base_bdevs": 2, 00:08:41.029 "num_base_bdevs_discovered": 2, 00:08:41.029 "num_base_bdevs_operational": 2, 00:08:41.029 "base_bdevs_list": [ 00:08:41.029 { 00:08:41.029 "name": "BaseBdev1", 00:08:41.029 "uuid": "339a4d34-c694-4a9a-929d-a243e3000752", 00:08:41.029 "is_configured": true, 00:08:41.029 "data_offset": 0, 00:08:41.029 "data_size": 65536 00:08:41.029 }, 00:08:41.029 { 00:08:41.029 "name": "BaseBdev2", 00:08:41.029 "uuid": "fbaa66f3-5f46-4602-a7db-b373bfaf7a9a", 00:08:41.029 "is_configured": true, 00:08:41.029 "data_offset": 0, 00:08:41.029 "data_size": 65536 00:08:41.029 } 00:08:41.029 ] 00:08:41.029 }' 00:08:41.029 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.029 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.599 03:17:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.599 [2024-11-21 03:17:28.946639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.599 03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.599 "name": "Existed_Raid", 00:08:41.599 "aliases": [ 00:08:41.599 "47d6ab78-828d-41a9-a963-26c9c23a93a6" 00:08:41.599 ], 00:08:41.599 "product_name": "Raid Volume", 00:08:41.599 "block_size": 512, 00:08:41.599 "num_blocks": 65536, 00:08:41.599 "uuid": "47d6ab78-828d-41a9-a963-26c9c23a93a6", 00:08:41.599 "assigned_rate_limits": { 00:08:41.599 "rw_ios_per_sec": 0, 00:08:41.599 "rw_mbytes_per_sec": 0, 00:08:41.599 "r_mbytes_per_sec": 0, 00:08:41.599 "w_mbytes_per_sec": 0 00:08:41.599 }, 00:08:41.599 "claimed": false, 00:08:41.599 "zoned": false, 00:08:41.599 "supported_io_types": { 00:08:41.599 "read": true, 00:08:41.599 "write": true, 00:08:41.599 "unmap": false, 00:08:41.599 
"flush": false, 00:08:41.599 "reset": true, 00:08:41.599 "nvme_admin": false, 00:08:41.599 "nvme_io": false, 00:08:41.599 "nvme_io_md": false, 00:08:41.599 "write_zeroes": true, 00:08:41.599 "zcopy": false, 00:08:41.599 "get_zone_info": false, 00:08:41.599 "zone_management": false, 00:08:41.599 "zone_append": false, 00:08:41.599 "compare": false, 00:08:41.599 "compare_and_write": false, 00:08:41.599 "abort": false, 00:08:41.599 "seek_hole": false, 00:08:41.599 "seek_data": false, 00:08:41.599 "copy": false, 00:08:41.599 "nvme_iov_md": false 00:08:41.599 }, 00:08:41.599 "memory_domains": [ 00:08:41.599 { 00:08:41.599 "dma_device_id": "system", 00:08:41.599 "dma_device_type": 1 00:08:41.599 }, 00:08:41.599 { 00:08:41.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.599 "dma_device_type": 2 00:08:41.599 }, 00:08:41.599 { 00:08:41.599 "dma_device_id": "system", 00:08:41.599 "dma_device_type": 1 00:08:41.599 }, 00:08:41.599 { 00:08:41.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.599 "dma_device_type": 2 00:08:41.599 } 00:08:41.599 ], 00:08:41.599 "driver_specific": { 00:08:41.599 "raid": { 00:08:41.599 "uuid": "47d6ab78-828d-41a9-a963-26c9c23a93a6", 00:08:41.599 "strip_size_kb": 0, 00:08:41.599 "state": "online", 00:08:41.599 "raid_level": "raid1", 00:08:41.599 "superblock": false, 00:08:41.599 "num_base_bdevs": 2, 00:08:41.599 "num_base_bdevs_discovered": 2, 00:08:41.599 "num_base_bdevs_operational": 2, 00:08:41.599 "base_bdevs_list": [ 00:08:41.599 { 00:08:41.599 "name": "BaseBdev1", 00:08:41.599 "uuid": "339a4d34-c694-4a9a-929d-a243e3000752", 00:08:41.599 "is_configured": true, 00:08:41.599 "data_offset": 0, 00:08:41.599 "data_size": 65536 00:08:41.599 }, 00:08:41.599 { 00:08:41.599 "name": "BaseBdev2", 00:08:41.599 "uuid": "fbaa66f3-5f46-4602-a7db-b373bfaf7a9a", 00:08:41.599 "is_configured": true, 00:08:41.599 "data_offset": 0, 00:08:41.599 "data_size": 65536 00:08:41.599 } 00:08:41.599 ] 00:08:41.599 } 00:08:41.599 } 00:08:41.599 }' 00:08:41.599 
03:17:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:41.599 BaseBdev2' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.599 03:17:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.599 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.599 [2024-11-21 03:17:29.158464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.860 "name": "Existed_Raid", 00:08:41.860 "uuid": "47d6ab78-828d-41a9-a963-26c9c23a93a6", 00:08:41.860 "strip_size_kb": 0, 00:08:41.860 "state": "online", 00:08:41.860 "raid_level": "raid1", 00:08:41.860 "superblock": false, 00:08:41.860 "num_base_bdevs": 2, 00:08:41.860 "num_base_bdevs_discovered": 1, 00:08:41.860 "num_base_bdevs_operational": 1, 00:08:41.860 "base_bdevs_list": [ 00:08:41.860 { 00:08:41.860 "name": null, 00:08:41.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.860 "is_configured": false, 00:08:41.860 "data_offset": 0, 00:08:41.860 "data_size": 65536 00:08:41.860 }, 00:08:41.860 { 00:08:41.860 
"name": "BaseBdev2", 00:08:41.860 "uuid": "fbaa66f3-5f46-4602-a7db-b373bfaf7a9a", 00:08:41.860 "is_configured": true, 00:08:41.860 "data_offset": 0, 00:08:41.860 "data_size": 65536 00:08:41.860 } 00:08:41.860 ] 00:08:41.860 }' 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.860 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.122 [2024-11-21 03:17:29.650465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.122 [2024-11-21 03:17:29.650716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:08:42.122 [2024-11-21 03:17:29.662960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.122 [2024-11-21 03:17:29.663099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.122 [2024-11-21 03:17:29.663117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.122 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75979 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75979 ']' 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75979 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@959 -- # uname 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75979 00:08:42.383 killing process with pid 75979 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75979' 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75979 00:08:42.383 [2024-11-21 03:17:29.755386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.383 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75979 00:08:42.383 [2024-11-21 03:17:29.756446] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.643 03:17:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.643 00:08:42.643 real 0m3.930s 00:08:42.643 user 0m6.161s 00:08:42.643 sys 0m0.803s 00:08:42.643 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.643 03:17:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.643 ************************************ 00:08:42.643 END TEST raid_state_function_test 00:08:42.643 ************************************ 00:08:42.643 03:17:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:42.643 03:17:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:42.643 03:17:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.643 
03:17:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.643 ************************************ 00:08:42.643 START TEST raid_state_function_test_sb 00:08:42.643 ************************************ 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:42.643 Process raid pid: 76221 00:08:42.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=76221 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76221' 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 76221 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76221 ']' 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.643 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:42.643 [2024-11-21 03:17:30.130657] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:42.643 [2024-11-21 03:17:30.130955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.903 [2024-11-21 03:17:30.271874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:42.903 [2024-11-21 03:17:30.296861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.903 [2024-11-21 03:17:30.327298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.903 [2024-11-21 03:17:30.370739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.903 [2024-11-21 03:17:30.370854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.472 [2024-11-21 03:17:30.985861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.472 [2024-11-21 03:17:30.986027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.472 [2024-11-21 03:17:30.986063] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.472 [2024-11-21 03:17:30.986087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.472 
03:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.472 03:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.472 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.731 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.731 "name": "Existed_Raid", 00:08:43.731 "uuid": "15d1551b-c3bf-4e8e-a923-9b5fee8c2591", 00:08:43.731 "strip_size_kb": 0, 00:08:43.731 "state": "configuring", 00:08:43.731 "raid_level": "raid1", 00:08:43.731 "superblock": true, 00:08:43.731 "num_base_bdevs": 2, 00:08:43.731 "num_base_bdevs_discovered": 0, 00:08:43.731 "num_base_bdevs_operational": 2, 00:08:43.731 "base_bdevs_list": [ 00:08:43.731 { 00:08:43.731 "name": "BaseBdev1", 00:08:43.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.731 "is_configured": false, 00:08:43.731 "data_offset": 0, 00:08:43.731 "data_size": 0 00:08:43.731 }, 00:08:43.731 { 00:08:43.731 "name": "BaseBdev2", 00:08:43.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.731 "is_configured": false, 00:08:43.731 "data_offset": 0, 00:08:43.731 "data_size": 0 00:08:43.731 } 00:08:43.731 ] 00:08:43.732 }' 00:08:43.732 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.732 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.991 
[2024-11-21 03:17:31.437891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.991 [2024-11-21 03:17:31.438054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.991 [2024-11-21 03:17:31.449936] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.991 [2024-11-21 03:17:31.449991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.991 [2024-11-21 03:17:31.450003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.991 [2024-11-21 03:17:31.450011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.991 [2024-11-21 03:17:31.471194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.991 BaseBdev1 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:43.991 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 [ 00:08:43.992 { 00:08:43.992 "name": "BaseBdev1", 00:08:43.992 "aliases": [ 00:08:43.992 "022e8d4a-9acf-4d30-9efb-174388c8b247" 00:08:43.992 ], 00:08:43.992 "product_name": "Malloc disk", 00:08:43.992 "block_size": 512, 00:08:43.992 "num_blocks": 65536, 00:08:43.992 "uuid": "022e8d4a-9acf-4d30-9efb-174388c8b247", 00:08:43.992 "assigned_rate_limits": { 00:08:43.992 "rw_ios_per_sec": 0, 00:08:43.992 "rw_mbytes_per_sec": 0, 00:08:43.992 "r_mbytes_per_sec": 0, 
00:08:43.992 "w_mbytes_per_sec": 0 00:08:43.992 }, 00:08:43.992 "claimed": true, 00:08:43.992 "claim_type": "exclusive_write", 00:08:43.992 "zoned": false, 00:08:43.992 "supported_io_types": { 00:08:43.992 "read": true, 00:08:43.992 "write": true, 00:08:43.992 "unmap": true, 00:08:43.992 "flush": true, 00:08:43.992 "reset": true, 00:08:43.992 "nvme_admin": false, 00:08:43.992 "nvme_io": false, 00:08:43.992 "nvme_io_md": false, 00:08:43.992 "write_zeroes": true, 00:08:43.992 "zcopy": true, 00:08:43.992 "get_zone_info": false, 00:08:43.992 "zone_management": false, 00:08:43.992 "zone_append": false, 00:08:43.992 "compare": false, 00:08:43.992 "compare_and_write": false, 00:08:43.992 "abort": true, 00:08:43.992 "seek_hole": false, 00:08:43.992 "seek_data": false, 00:08:43.992 "copy": true, 00:08:43.992 "nvme_iov_md": false 00:08:43.992 }, 00:08:43.992 "memory_domains": [ 00:08:43.992 { 00:08:43.992 "dma_device_id": "system", 00:08:43.992 "dma_device_type": 1 00:08:43.992 }, 00:08:43.992 { 00:08:43.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.992 "dma_device_type": 2 00:08:43.992 } 00:08:43.992 ], 00:08:43.992 "driver_specific": {} 00:08:43.992 } 00:08:43.992 ] 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.992 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.251 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.251 "name": "Existed_Raid", 00:08:44.251 "uuid": "f443b325-fa52-4411-add3-5c91c890878d", 00:08:44.251 "strip_size_kb": 0, 00:08:44.251 "state": "configuring", 00:08:44.251 "raid_level": "raid1", 00:08:44.251 "superblock": true, 00:08:44.251 "num_base_bdevs": 2, 00:08:44.251 "num_base_bdevs_discovered": 1, 00:08:44.251 "num_base_bdevs_operational": 2, 00:08:44.251 "base_bdevs_list": [ 00:08:44.251 { 00:08:44.251 "name": "BaseBdev1", 00:08:44.251 "uuid": "022e8d4a-9acf-4d30-9efb-174388c8b247", 00:08:44.251 "is_configured": true, 00:08:44.251 "data_offset": 2048, 00:08:44.251 "data_size": 63488 00:08:44.251 }, 00:08:44.251 { 00:08:44.251 "name": "BaseBdev2", 00:08:44.251 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:44.251 "is_configured": false, 00:08:44.251 "data_offset": 0, 00:08:44.251 "data_size": 0 00:08:44.251 } 00:08:44.251 ] 00:08:44.251 }' 00:08:44.251 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.251 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.511 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.511 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.511 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 [2024-11-21 03:17:31.967415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.512 [2024-11-21 03:17:31.967514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 [2024-11-21 03:17:31.979485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.512 [2024-11-21 03:17:31.981451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.512 [2024-11-21 03:17:31.981507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.512 03:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.512 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:44.512 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.512 "name": "Existed_Raid", 00:08:44.512 "uuid": "c77c1d94-b07f-4e35-b257-ca517ff8a5c5", 00:08:44.512 "strip_size_kb": 0, 00:08:44.512 "state": "configuring", 00:08:44.512 "raid_level": "raid1", 00:08:44.512 "superblock": true, 00:08:44.512 "num_base_bdevs": 2, 00:08:44.512 "num_base_bdevs_discovered": 1, 00:08:44.512 "num_base_bdevs_operational": 2, 00:08:44.512 "base_bdevs_list": [ 00:08:44.512 { 00:08:44.512 "name": "BaseBdev1", 00:08:44.512 "uuid": "022e8d4a-9acf-4d30-9efb-174388c8b247", 00:08:44.512 "is_configured": true, 00:08:44.512 "data_offset": 2048, 00:08:44.512 "data_size": 63488 00:08:44.512 }, 00:08:44.512 { 00:08:44.512 "name": "BaseBdev2", 00:08:44.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.512 "is_configured": false, 00:08:44.512 "data_offset": 0, 00:08:44.512 "data_size": 0 00:08:44.512 } 00:08:44.512 ] 00:08:44.512 }' 00:08:44.512 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.512 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 [2024-11-21 03:17:32.418622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.081 [2024-11-21 03:17:32.418925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:45.081 [2024-11-21 03:17:32.418981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.081 BaseBdev2 00:08:45.081 [2024-11-21 03:17:32.419270] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:45.081 [2024-11-21 03:17:32.419462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:45.081 [2024-11-21 03:17:32.419512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 [2024-11-21 03:17:32.419684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 [ 00:08:45.081 { 00:08:45.081 "name": "BaseBdev2", 00:08:45.081 "aliases": [ 00:08:45.081 "2ec09592-b5c5-4ffd-9bfe-56bc5d9b6610" 00:08:45.081 ], 00:08:45.081 "product_name": "Malloc disk", 00:08:45.081 "block_size": 512, 00:08:45.081 "num_blocks": 65536, 00:08:45.081 "uuid": "2ec09592-b5c5-4ffd-9bfe-56bc5d9b6610", 00:08:45.081 "assigned_rate_limits": { 00:08:45.081 "rw_ios_per_sec": 0, 00:08:45.081 "rw_mbytes_per_sec": 0, 00:08:45.081 "r_mbytes_per_sec": 0, 00:08:45.081 "w_mbytes_per_sec": 0 00:08:45.081 }, 00:08:45.081 "claimed": true, 00:08:45.081 "claim_type": "exclusive_write", 00:08:45.081 "zoned": false, 00:08:45.081 "supported_io_types": { 00:08:45.081 "read": true, 00:08:45.081 "write": true, 00:08:45.081 "unmap": true, 00:08:45.081 "flush": true, 00:08:45.081 "reset": true, 00:08:45.081 "nvme_admin": false, 00:08:45.081 "nvme_io": false, 00:08:45.081 "nvme_io_md": false, 00:08:45.081 "write_zeroes": true, 00:08:45.081 "zcopy": true, 00:08:45.081 "get_zone_info": false, 00:08:45.081 "zone_management": false, 00:08:45.081 "zone_append": false, 00:08:45.081 "compare": false, 00:08:45.081 "compare_and_write": false, 00:08:45.081 "abort": true, 00:08:45.081 "seek_hole": false, 00:08:45.081 "seek_data": false, 00:08:45.081 "copy": true, 00:08:45.081 "nvme_iov_md": false 00:08:45.081 }, 00:08:45.081 "memory_domains": [ 00:08:45.081 { 00:08:45.081 "dma_device_id": "system", 00:08:45.081 "dma_device_type": 1 00:08:45.081 }, 00:08:45.081 { 00:08:45.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.081 "dma_device_type": 2 00:08:45.081 } 00:08:45.081 ], 00:08:45.081 "driver_specific": {} 00:08:45.081 } 00:08:45.081 ] 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:45.081 
03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 03:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.081 "name": "Existed_Raid", 00:08:45.081 "uuid": "c77c1d94-b07f-4e35-b257-ca517ff8a5c5", 00:08:45.081 "strip_size_kb": 0, 00:08:45.081 "state": "online", 00:08:45.081 "raid_level": "raid1", 00:08:45.081 "superblock": true, 00:08:45.081 "num_base_bdevs": 2, 00:08:45.081 "num_base_bdevs_discovered": 2, 00:08:45.081 "num_base_bdevs_operational": 2, 00:08:45.081 "base_bdevs_list": [ 00:08:45.081 { 00:08:45.081 "name": "BaseBdev1", 00:08:45.081 "uuid": "022e8d4a-9acf-4d30-9efb-174388c8b247", 00:08:45.081 "is_configured": true, 00:08:45.081 "data_offset": 2048, 00:08:45.081 "data_size": 63488 00:08:45.081 }, 00:08:45.081 { 00:08:45.081 "name": "BaseBdev2", 00:08:45.081 "uuid": "2ec09592-b5c5-4ffd-9bfe-56bc5d9b6610", 00:08:45.081 "is_configured": true, 00:08:45.081 "data_offset": 2048, 00:08:45.081 "data_size": 63488 00:08:45.081 } 00:08:45.081 ] 00:08:45.081 }' 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.081 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.340 [2024-11-21 03:17:32.851244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.340 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.340 "name": "Existed_Raid", 00:08:45.340 "aliases": [ 00:08:45.340 "c77c1d94-b07f-4e35-b257-ca517ff8a5c5" 00:08:45.340 ], 00:08:45.340 "product_name": "Raid Volume", 00:08:45.340 "block_size": 512, 00:08:45.340 "num_blocks": 63488, 00:08:45.341 "uuid": "c77c1d94-b07f-4e35-b257-ca517ff8a5c5", 00:08:45.341 "assigned_rate_limits": { 00:08:45.341 "rw_ios_per_sec": 0, 00:08:45.341 "rw_mbytes_per_sec": 0, 00:08:45.341 "r_mbytes_per_sec": 0, 00:08:45.341 "w_mbytes_per_sec": 0 00:08:45.341 }, 00:08:45.341 "claimed": false, 00:08:45.341 "zoned": false, 00:08:45.341 "supported_io_types": { 00:08:45.341 "read": true, 00:08:45.341 "write": true, 00:08:45.341 "unmap": false, 00:08:45.341 "flush": false, 00:08:45.341 "reset": true, 00:08:45.341 "nvme_admin": false, 00:08:45.341 "nvme_io": false, 00:08:45.341 "nvme_io_md": false, 00:08:45.341 "write_zeroes": true, 00:08:45.341 "zcopy": false, 00:08:45.341 "get_zone_info": false, 00:08:45.341 "zone_management": false, 00:08:45.341 "zone_append": false, 00:08:45.341 "compare": false, 00:08:45.341 "compare_and_write": false, 00:08:45.341 "abort": false, 00:08:45.341 "seek_hole": false, 00:08:45.341 "seek_data": false, 00:08:45.341 "copy": false, 00:08:45.341 "nvme_iov_md": false 00:08:45.341 }, 00:08:45.341 "memory_domains": [ 00:08:45.341 { 00:08:45.341 
"dma_device_id": "system", 00:08:45.341 "dma_device_type": 1 00:08:45.341 }, 00:08:45.341 { 00:08:45.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.341 "dma_device_type": 2 00:08:45.341 }, 00:08:45.341 { 00:08:45.341 "dma_device_id": "system", 00:08:45.341 "dma_device_type": 1 00:08:45.341 }, 00:08:45.341 { 00:08:45.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.341 "dma_device_type": 2 00:08:45.341 } 00:08:45.341 ], 00:08:45.341 "driver_specific": { 00:08:45.341 "raid": { 00:08:45.341 "uuid": "c77c1d94-b07f-4e35-b257-ca517ff8a5c5", 00:08:45.341 "strip_size_kb": 0, 00:08:45.341 "state": "online", 00:08:45.341 "raid_level": "raid1", 00:08:45.341 "superblock": true, 00:08:45.341 "num_base_bdevs": 2, 00:08:45.341 "num_base_bdevs_discovered": 2, 00:08:45.341 "num_base_bdevs_operational": 2, 00:08:45.341 "base_bdevs_list": [ 00:08:45.341 { 00:08:45.341 "name": "BaseBdev1", 00:08:45.341 "uuid": "022e8d4a-9acf-4d30-9efb-174388c8b247", 00:08:45.341 "is_configured": true, 00:08:45.341 "data_offset": 2048, 00:08:45.341 "data_size": 63488 00:08:45.341 }, 00:08:45.341 { 00:08:45.341 "name": "BaseBdev2", 00:08:45.341 "uuid": "2ec09592-b5c5-4ffd-9bfe-56bc5d9b6610", 00:08:45.341 "is_configured": true, 00:08:45.341 "data_offset": 2048, 00:08:45.341 "data_size": 63488 00:08:45.341 } 00:08:45.341 ] 00:08:45.341 } 00:08:45.341 } 00:08:45.341 }' 00:08:45.341 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:45.601 BaseBdev2' 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.601 03:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.601 03:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.601 [2024-11-21 03:17:33.087045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.601 03:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.601 "name": "Existed_Raid", 00:08:45.601 "uuid": "c77c1d94-b07f-4e35-b257-ca517ff8a5c5", 00:08:45.601 "strip_size_kb": 0, 00:08:45.601 "state": "online", 00:08:45.601 "raid_level": "raid1", 00:08:45.601 "superblock": true, 00:08:45.601 "num_base_bdevs": 2, 00:08:45.601 "num_base_bdevs_discovered": 1, 00:08:45.601 "num_base_bdevs_operational": 1, 00:08:45.601 "base_bdevs_list": [ 00:08:45.601 { 00:08:45.601 "name": null, 00:08:45.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.601 "is_configured": false, 00:08:45.601 "data_offset": 0, 00:08:45.601 "data_size": 63488 00:08:45.601 }, 00:08:45.601 { 00:08:45.601 "name": "BaseBdev2", 00:08:45.601 "uuid": "2ec09592-b5c5-4ffd-9bfe-56bc5d9b6610", 00:08:45.601 "is_configured": true, 00:08:45.601 "data_offset": 2048, 00:08:45.601 "data_size": 63488 00:08:45.601 } 00:08:45.601 ] 00:08:45.601 }' 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.601 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.169 03:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.170 [2024-11-21 03:17:33.591363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.170 [2024-11-21 03:17:33.591585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.170 [2024-11-21 03:17:33.603856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.170 [2024-11-21 03:17:33.603996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.170 [2024-11-21 03:17:33.604035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, 
state offline 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 76221 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76221 ']' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 76221 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76221 00:08:46.170 killing process with pid 76221 00:08:46.170 03:17:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76221' 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 76221 00:08:46.170 [2024-11-21 03:17:33.696616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.170 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 76221 00:08:46.170 [2024-11-21 03:17:33.697731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.428 03:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.428 00:08:46.428 real 0m3.888s 00:08:46.428 user 0m6.118s 00:08:46.428 sys 0m0.786s 00:08:46.428 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.428 03:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.428 ************************************ 00:08:46.428 END TEST raid_state_function_test_sb 00:08:46.428 ************************************ 00:08:46.428 03:17:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:46.428 03:17:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:46.428 03:17:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.428 03:17:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.428 ************************************ 00:08:46.428 START TEST raid_superblock_test 00:08:46.428 ************************************ 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:46.428 
03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:46.428 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76457 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76457 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 
-- # '[' -z 76457 ']' 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.687 03:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.687 [2024-11-21 03:17:34.086873] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:46.687 [2024-11-21 03:17:34.087036] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76457 ] 00:08:46.687 [2024-11-21 03:17:34.229475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:46.946 [2024-11-21 03:17:34.270084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.946 [2024-11-21 03:17:34.301470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.946 [2024-11-21 03:17:34.344833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.946 [2024-11-21 03:17:34.344967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.512 malloc1 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.512 [2024-11-21 03:17:35.042066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.512 [2024-11-21 03:17:35.042259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.512 [2024-11-21 03:17:35.042312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:47.512 [2024-11-21 03:17:35.042347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.512 [2024-11-21 03:17:35.044839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.512 [2024-11-21 03:17:35.044964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.512 pt1 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.512 malloc2 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.512 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.772 [2024-11-21 03:17:35.075356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.772 [2024-11-21 03:17:35.075447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.772 [2024-11-21 03:17:35.075470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:47.772 [2024-11-21 03:17:35.075479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.772 [2024-11-21 03:17:35.077955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.772 [2024-11-21 03:17:35.078111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.772 pt2 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.772 [2024-11-21 03:17:35.087396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.772 [2024-11-21 03:17:35.089576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.772 [2024-11-21 03:17:35.089847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:47.772 [2024-11-21 03:17:35.089866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.772 [2024-11-21 03:17:35.090223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:47.772 [2024-11-21 03:17:35.090380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:47.772 [2024-11-21 03:17:35.090395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:47.772 [2024-11-21 03:17:35.090568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.772 "name": "raid_bdev1", 00:08:47.772 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:47.772 "strip_size_kb": 0, 00:08:47.772 "state": "online", 00:08:47.772 "raid_level": "raid1", 00:08:47.772 "superblock": true, 00:08:47.772 "num_base_bdevs": 2, 00:08:47.772 "num_base_bdevs_discovered": 2, 00:08:47.772 "num_base_bdevs_operational": 2, 00:08:47.772 "base_bdevs_list": [ 00:08:47.772 { 00:08:47.772 "name": "pt1", 00:08:47.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.772 "is_configured": true, 00:08:47.772 "data_offset": 2048, 00:08:47.772 "data_size": 63488 00:08:47.772 }, 00:08:47.772 { 00:08:47.772 "name": "pt2", 00:08:47.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.772 "is_configured": true, 00:08:47.772 
"data_offset": 2048, 00:08:47.772 "data_size": 63488 00:08:47.772 } 00:08:47.772 ] 00:08:47.772 }' 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.772 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.040 [2024-11-21 03:17:35.535907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.040 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.040 "name": "raid_bdev1", 00:08:48.040 "aliases": [ 00:08:48.040 "df88e99f-fc1b-4c37-af04-b586ec51e601" 00:08:48.040 ], 00:08:48.040 "product_name": "Raid Volume", 00:08:48.040 "block_size": 512, 00:08:48.040 "num_blocks": 63488, 00:08:48.040 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 
00:08:48.040 "assigned_rate_limits": { 00:08:48.040 "rw_ios_per_sec": 0, 00:08:48.040 "rw_mbytes_per_sec": 0, 00:08:48.040 "r_mbytes_per_sec": 0, 00:08:48.040 "w_mbytes_per_sec": 0 00:08:48.040 }, 00:08:48.040 "claimed": false, 00:08:48.040 "zoned": false, 00:08:48.040 "supported_io_types": { 00:08:48.040 "read": true, 00:08:48.040 "write": true, 00:08:48.040 "unmap": false, 00:08:48.040 "flush": false, 00:08:48.040 "reset": true, 00:08:48.040 "nvme_admin": false, 00:08:48.040 "nvme_io": false, 00:08:48.040 "nvme_io_md": false, 00:08:48.040 "write_zeroes": true, 00:08:48.040 "zcopy": false, 00:08:48.040 "get_zone_info": false, 00:08:48.040 "zone_management": false, 00:08:48.040 "zone_append": false, 00:08:48.040 "compare": false, 00:08:48.040 "compare_and_write": false, 00:08:48.040 "abort": false, 00:08:48.040 "seek_hole": false, 00:08:48.040 "seek_data": false, 00:08:48.041 "copy": false, 00:08:48.041 "nvme_iov_md": false 00:08:48.041 }, 00:08:48.041 "memory_domains": [ 00:08:48.041 { 00:08:48.041 "dma_device_id": "system", 00:08:48.041 "dma_device_type": 1 00:08:48.041 }, 00:08:48.041 { 00:08:48.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.041 "dma_device_type": 2 00:08:48.041 }, 00:08:48.041 { 00:08:48.041 "dma_device_id": "system", 00:08:48.041 "dma_device_type": 1 00:08:48.041 }, 00:08:48.041 { 00:08:48.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.041 "dma_device_type": 2 00:08:48.041 } 00:08:48.041 ], 00:08:48.041 "driver_specific": { 00:08:48.041 "raid": { 00:08:48.041 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:48.041 "strip_size_kb": 0, 00:08:48.041 "state": "online", 00:08:48.041 "raid_level": "raid1", 00:08:48.041 "superblock": true, 00:08:48.041 "num_base_bdevs": 2, 00:08:48.041 "num_base_bdevs_discovered": 2, 00:08:48.041 "num_base_bdevs_operational": 2, 00:08:48.041 "base_bdevs_list": [ 00:08:48.041 { 00:08:48.041 "name": "pt1", 00:08:48.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.041 "is_configured": 
true, 00:08:48.041 "data_offset": 2048, 00:08:48.041 "data_size": 63488 00:08:48.041 }, 00:08:48.041 { 00:08:48.041 "name": "pt2", 00:08:48.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.041 "is_configured": true, 00:08:48.041 "data_offset": 2048, 00:08:48.041 "data_size": 63488 00:08:48.041 } 00:08:48.041 ] 00:08:48.041 } 00:08:48.041 } 00:08:48.041 }' 00:08:48.041 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:48.312 pt2' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.312 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.313 [2024-11-21 03:17:35.775869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=df88e99f-fc1b-4c37-af04-b586ec51e601 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z df88e99f-fc1b-4c37-af04-b586ec51e601 ']' 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.313 [2024-11-21 
03:17:35.819569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.313 [2024-11-21 03:17:35.819616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.313 [2024-11-21 03:17:35.819723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.313 [2024-11-21 03:17:35.819803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.313 [2024-11-21 03:17:35.819826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.313 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:48.571 
03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.571 [2024-11-21 03:17:35.947665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:48.571 [2024-11-21 03:17:35.949859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:48.571 [2024-11-21 03:17:35.949942] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:48.571 [2024-11-21 03:17:35.950004] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:48.571 [2024-11-21 03:17:35.950036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.571 [2024-11-21 03:17:35.950049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:48.571 request: 00:08:48.571 { 00:08:48.571 "name": "raid_bdev1", 00:08:48.571 "raid_level": "raid1", 00:08:48.571 "base_bdevs": [ 00:08:48.571 "malloc1", 00:08:48.571 "malloc2" 00:08:48.571 ], 00:08:48.571 "superblock": false, 00:08:48.571 "method": "bdev_raid_create", 00:08:48.571 "req_id": 1 00:08:48.571 } 00:08:48.571 Got JSON-RPC error response 00:08:48.571 response: 00:08:48.571 { 00:08:48.571 "code": -17, 00:08:48.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:48.571 } 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:48.571 03:17:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.571 03:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.572 [2024-11-21 03:17:36.015658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:48.572 [2024-11-21 03:17:36.015844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.572 [2024-11-21 03:17:36.015884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:48.572 [2024-11-21 03:17:36.015928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.572 [2024-11-21 03:17:36.018458] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.572 [2024-11-21 03:17:36.018573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:48.572 [2024-11-21 03:17:36.018692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:48.572 [2024-11-21 03:17:36.018776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:48.572 pt1 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.572 "name": "raid_bdev1", 00:08:48.572 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:48.572 "strip_size_kb": 0, 00:08:48.572 "state": "configuring", 00:08:48.572 "raid_level": "raid1", 00:08:48.572 "superblock": true, 00:08:48.572 "num_base_bdevs": 2, 00:08:48.572 "num_base_bdevs_discovered": 1, 00:08:48.572 "num_base_bdevs_operational": 2, 00:08:48.572 "base_bdevs_list": [ 00:08:48.572 { 00:08:48.572 "name": "pt1", 00:08:48.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.572 "is_configured": true, 00:08:48.572 "data_offset": 2048, 00:08:48.572 "data_size": 63488 00:08:48.572 }, 00:08:48.572 { 00:08:48.572 "name": null, 00:08:48.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.572 "is_configured": false, 00:08:48.572 "data_offset": 2048, 00:08:48.572 "data_size": 63488 00:08:48.572 } 00:08:48.572 ] 00:08:48.572 }' 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.572 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.137 [2024-11-21 03:17:36.471765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.137 [2024-11-21 03:17:36.471863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.137 [2024-11-21 03:17:36.471889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:49.137 [2024-11-21 03:17:36.471902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.137 [2024-11-21 03:17:36.472386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.137 [2024-11-21 03:17:36.472432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:49.137 [2024-11-21 03:17:36.472518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:49.137 [2024-11-21 03:17:36.472544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.137 [2024-11-21 03:17:36.472655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:49.137 [2024-11-21 03:17:36.472677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.137 [2024-11-21 03:17:36.472939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:49.137 [2024-11-21 03:17:36.473097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:49.137 [2024-11-21 03:17:36.473110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:49.137 [2024-11-21 03:17:36.473231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.137 pt2 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.137 03:17:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.137 "name": "raid_bdev1", 00:08:49.137 "uuid": 
"df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:49.137 "strip_size_kb": 0, 00:08:49.137 "state": "online", 00:08:49.137 "raid_level": "raid1", 00:08:49.137 "superblock": true, 00:08:49.137 "num_base_bdevs": 2, 00:08:49.137 "num_base_bdevs_discovered": 2, 00:08:49.137 "num_base_bdevs_operational": 2, 00:08:49.137 "base_bdevs_list": [ 00:08:49.137 { 00:08:49.137 "name": "pt1", 00:08:49.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.137 "is_configured": true, 00:08:49.137 "data_offset": 2048, 00:08:49.137 "data_size": 63488 00:08:49.137 }, 00:08:49.137 { 00:08:49.137 "name": "pt2", 00:08:49.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.137 "is_configured": true, 00:08:49.137 "data_offset": 2048, 00:08:49.137 "data_size": 63488 00:08:49.137 } 00:08:49.137 ] 00:08:49.137 }' 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.137 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.396 03:17:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.396 [2024-11-21 03:17:36.912252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.396 "name": "raid_bdev1", 00:08:49.396 "aliases": [ 00:08:49.396 "df88e99f-fc1b-4c37-af04-b586ec51e601" 00:08:49.396 ], 00:08:49.396 "product_name": "Raid Volume", 00:08:49.396 "block_size": 512, 00:08:49.396 "num_blocks": 63488, 00:08:49.396 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:49.396 "assigned_rate_limits": { 00:08:49.396 "rw_ios_per_sec": 0, 00:08:49.396 "rw_mbytes_per_sec": 0, 00:08:49.396 "r_mbytes_per_sec": 0, 00:08:49.396 "w_mbytes_per_sec": 0 00:08:49.396 }, 00:08:49.396 "claimed": false, 00:08:49.396 "zoned": false, 00:08:49.396 "supported_io_types": { 00:08:49.396 "read": true, 00:08:49.396 "write": true, 00:08:49.396 "unmap": false, 00:08:49.396 "flush": false, 00:08:49.396 "reset": true, 00:08:49.396 "nvme_admin": false, 00:08:49.396 "nvme_io": false, 00:08:49.396 "nvme_io_md": false, 00:08:49.396 "write_zeroes": true, 00:08:49.396 "zcopy": false, 00:08:49.396 "get_zone_info": false, 00:08:49.396 "zone_management": false, 00:08:49.396 "zone_append": false, 00:08:49.396 "compare": false, 00:08:49.396 "compare_and_write": false, 00:08:49.396 "abort": false, 00:08:49.396 "seek_hole": false, 00:08:49.396 "seek_data": false, 00:08:49.396 "copy": false, 00:08:49.396 "nvme_iov_md": false 00:08:49.396 }, 00:08:49.396 "memory_domains": [ 00:08:49.396 { 00:08:49.396 "dma_device_id": "system", 00:08:49.396 "dma_device_type": 1 00:08:49.396 }, 00:08:49.396 { 00:08:49.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.396 "dma_device_type": 2 00:08:49.396 }, 00:08:49.396 { 00:08:49.396 "dma_device_id": "system", 00:08:49.396 "dma_device_type": 
1 00:08:49.396 }, 00:08:49.396 { 00:08:49.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.396 "dma_device_type": 2 00:08:49.396 } 00:08:49.396 ], 00:08:49.396 "driver_specific": { 00:08:49.396 "raid": { 00:08:49.396 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:49.396 "strip_size_kb": 0, 00:08:49.396 "state": "online", 00:08:49.396 "raid_level": "raid1", 00:08:49.396 "superblock": true, 00:08:49.396 "num_base_bdevs": 2, 00:08:49.396 "num_base_bdevs_discovered": 2, 00:08:49.396 "num_base_bdevs_operational": 2, 00:08:49.396 "base_bdevs_list": [ 00:08:49.396 { 00:08:49.396 "name": "pt1", 00:08:49.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.396 "is_configured": true, 00:08:49.396 "data_offset": 2048, 00:08:49.396 "data_size": 63488 00:08:49.396 }, 00:08:49.396 { 00:08:49.396 "name": "pt2", 00:08:49.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.396 "is_configured": true, 00:08:49.396 "data_offset": 2048, 00:08:49.396 "data_size": 63488 00:08:49.396 } 00:08:49.396 ] 00:08:49.396 } 00:08:49.396 } 00:08:49.396 }' 00:08:49.396 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.655 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:49.655 pt2' 00:08:49.655 03:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 [2024-11-21 03:17:37.140333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.655 03:17:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' df88e99f-fc1b-4c37-af04-b586ec51e601 '!=' df88e99f-fc1b-4c37-af04-b586ec51e601 ']' 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 [2024-11-21 03:17:37.188066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.655 03:17:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.655 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.656 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.656 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.656 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.656 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.915 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.915 "name": "raid_bdev1", 00:08:49.915 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:49.915 "strip_size_kb": 0, 00:08:49.915 "state": "online", 00:08:49.915 "raid_level": "raid1", 00:08:49.915 "superblock": true, 00:08:49.915 "num_base_bdevs": 2, 00:08:49.915 "num_base_bdevs_discovered": 1, 00:08:49.915 "num_base_bdevs_operational": 1, 00:08:49.915 "base_bdevs_list": [ 00:08:49.915 { 00:08:49.915 "name": null, 00:08:49.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.915 "is_configured": false, 00:08:49.915 "data_offset": 0, 00:08:49.915 "data_size": 63488 00:08:49.915 }, 00:08:49.915 { 00:08:49.915 "name": "pt2", 00:08:49.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.915 "is_configured": true, 00:08:49.915 "data_offset": 2048, 00:08:49.915 "data_size": 63488 00:08:49.915 } 00:08:49.915 ] 00:08:49.915 }' 00:08:49.915 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.915 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.173 [2024-11-21 03:17:37.712175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.173 [2024-11-21 03:17:37.712226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.173 [2024-11-21 03:17:37.712326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.173 [2024-11-21 03:17:37.712385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.173 [2024-11-21 03:17:37.712399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.173 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd 
bdev_passthru_delete pt2 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.432 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.433 [2024-11-21 03:17:37.772209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.433 [2024-11-21 03:17:37.772302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.433 [2024-11-21 03:17:37.772322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:50.433 [2024-11-21 03:17:37.772334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.433 [2024-11-21 03:17:37.774830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.433 [2024-11-21 03:17:37.774896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.433 [2024-11-21 03:17:37.774995] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:50.433 [2024-11-21 03:17:37.775051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.433 [2024-11-21 03:17:37.775155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.433 [2024-11-21 03:17:37.775171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.433 [2024-11-21 03:17:37.775413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:50.433 [2024-11-21 03:17:37.775551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.433 [2024-11-21 03:17:37.775569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:50.433 [2024-11-21 03:17:37.775706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.433 pt2 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.433 "name": "raid_bdev1", 00:08:50.433 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:50.433 "strip_size_kb": 0, 00:08:50.433 "state": "online", 00:08:50.433 "raid_level": "raid1", 00:08:50.433 "superblock": true, 00:08:50.433 "num_base_bdevs": 2, 00:08:50.433 "num_base_bdevs_discovered": 1, 00:08:50.433 "num_base_bdevs_operational": 1, 00:08:50.433 "base_bdevs_list": [ 00:08:50.433 { 00:08:50.433 "name": null, 00:08:50.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.433 "is_configured": false, 00:08:50.433 "data_offset": 2048, 00:08:50.433 "data_size": 63488 00:08:50.433 }, 00:08:50.433 { 00:08:50.433 "name": "pt2", 00:08:50.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.433 "is_configured": true, 00:08:50.433 "data_offset": 2048, 00:08:50.433 "data_size": 63488 00:08:50.433 } 00:08:50.433 ] 00:08:50.433 }' 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.433 03:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.692 [2024-11-21 03:17:38.208358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.692 [2024-11-21 03:17:38.208511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.692 [2024-11-21 03:17:38.208621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.692 [2024-11-21 03:17:38.208697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.692 [2024-11-21 03:17:38.208751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:50.692 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.952 [2024-11-21 03:17:38.264358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.952 [2024-11-21 03:17:38.264541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.952 [2024-11-21 03:17:38.264587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:50.952 [2024-11-21 03:17:38.264600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.952 [2024-11-21 03:17:38.267133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.952 [2024-11-21 03:17:38.267190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.952 [2024-11-21 03:17:38.267290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:50.952 [2024-11-21 03:17:38.267323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.952 [2024-11-21 03:17:38.267431] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:50.952 [2024-11-21 03:17:38.267443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.952 [2024-11-21 03:17:38.267465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:08:50.952 [2024-11-21 03:17:38.267528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.952 [2024-11-21 03:17:38.267610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:50.952 [2024-11-21 03:17:38.267619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:08:50.952 [2024-11-21 03:17:38.267908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:50.952 [2024-11-21 03:17:38.268071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:50.952 [2024-11-21 03:17:38.268090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:50.952 [2024-11-21 03:17:38.268277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.952 pt1 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.952 03:17:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.952 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.952 "name": "raid_bdev1", 00:08:50.952 "uuid": "df88e99f-fc1b-4c37-af04-b586ec51e601", 00:08:50.952 "strip_size_kb": 0, 00:08:50.952 "state": "online", 00:08:50.952 "raid_level": "raid1", 00:08:50.952 "superblock": true, 00:08:50.952 "num_base_bdevs": 2, 00:08:50.952 "num_base_bdevs_discovered": 1, 00:08:50.952 "num_base_bdevs_operational": 1, 00:08:50.952 "base_bdevs_list": [ 00:08:50.952 { 00:08:50.952 "name": null, 00:08:50.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.952 "is_configured": false, 00:08:50.952 "data_offset": 2048, 00:08:50.952 "data_size": 63488 00:08:50.952 }, 00:08:50.952 { 00:08:50.952 "name": "pt2", 00:08:50.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.952 "is_configured": true, 00:08:50.952 "data_offset": 2048, 00:08:50.952 "data_size": 63488 00:08:50.952 } 00:08:50.952 ] 00:08:50.952 }' 00:08:50.953 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.953 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.212 [2024-11-21 03:17:38.732812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' df88e99f-fc1b-4c37-af04-b586ec51e601 '!=' df88e99f-fc1b-4c37-af04-b586ec51e601 ']' 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76457 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76457 ']' 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76457 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.212 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76457 00:08:51.471 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.472 killing process with pid 76457 00:08:51.472 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:08:51.472 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76457' 00:08:51.472 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76457 00:08:51.472 [2024-11-21 03:17:38.791189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.472 [2024-11-21 03:17:38.791310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.472 03:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76457 00:08:51.472 [2024-11-21 03:17:38.791366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.472 [2024-11-21 03:17:38.791381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:51.472 [2024-11-21 03:17:38.815805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.731 03:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:51.731 00:08:51.731 real 0m5.058s 00:08:51.731 user 0m8.303s 00:08:51.731 sys 0m1.059s 00:08:51.731 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.731 ************************************ 00:08:51.731 END TEST raid_superblock_test 00:08:51.731 ************************************ 00:08:51.731 03:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.731 03:17:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:51.731 03:17:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.731 03:17:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.731 03:17:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.731 ************************************ 00:08:51.731 START TEST raid_read_error_test 00:08:51.731 
************************************ 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.731 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 
-- # local fail_per_s 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xRd9ZcF4pu 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76776 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76776 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76776 ']' 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.732 03:17:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 [2024-11-21 03:17:39.228531] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:08:51.732 [2024-11-21 03:17:39.228760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76776 ] 00:08:51.991 [2024-11-21 03:17:39.369607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:51.991 [2024-11-21 03:17:39.407702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.991 [2024-11-21 03:17:39.438771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.991 [2024-11-21 03:17:39.482485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.991 [2024-11-21 03:17:39.482532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.560 BaseBdev1_malloc 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.560 03:17:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.560 true 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.560 [2024-11-21 03:17:40.114185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.560 [2024-11-21 03:17:40.114255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.560 [2024-11-21 03:17:40.114273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:52.560 [2024-11-21 03:17:40.114288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.560 [2024-11-21 03:17:40.116851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.560 BaseBdev1 00:08:52.560 [2024-11-21 03:17:40.116957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.560 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.821 BaseBdev2_malloc 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.821 true 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.821 [2024-11-21 03:17:40.161681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.821 [2024-11-21 03:17:40.161740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.821 [2024-11-21 03:17:40.161758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:52.821 [2024-11-21 03:17:40.161770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.821 [2024-11-21 03:17:40.164255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.821 [2024-11-21 03:17:40.164292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.821 BaseBdev2 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.821 03:17:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.821 [2024-11-21 03:17:40.173699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.821 [2024-11-21 03:17:40.175963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.821 [2024-11-21 03:17:40.176236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:52.821 [2024-11-21 03:17:40.176256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.821 [2024-11-21 03:17:40.176545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:52.821 [2024-11-21 03:17:40.176738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:52.821 [2024-11-21 03:17:40.176748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:52.821 [2024-11-21 03:17:40.176909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.821 "name": "raid_bdev1", 00:08:52.821 "uuid": "8bf45691-80db-4a05-b2db-a3d5e1e30634", 00:08:52.821 "strip_size_kb": 0, 00:08:52.821 "state": "online", 00:08:52.821 "raid_level": "raid1", 00:08:52.821 "superblock": true, 00:08:52.821 "num_base_bdevs": 2, 00:08:52.821 "num_base_bdevs_discovered": 2, 00:08:52.821 "num_base_bdevs_operational": 2, 00:08:52.821 "base_bdevs_list": [ 00:08:52.821 { 00:08:52.821 "name": "BaseBdev1", 00:08:52.821 "uuid": "b0cc07fc-de10-59d6-b365-0632cdc07788", 00:08:52.821 "is_configured": true, 00:08:52.821 "data_offset": 2048, 00:08:52.821 "data_size": 63488 00:08:52.821 }, 00:08:52.821 { 00:08:52.821 "name": "BaseBdev2", 00:08:52.821 "uuid": "f14e1775-692d-5569-ba25-a6c9986985c1", 00:08:52.821 "is_configured": true, 00:08:52.821 "data_offset": 2048, 00:08:52.821 "data_size": 63488 00:08:52.821 } 00:08:52.821 ] 00:08:52.821 }' 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.821 03:17:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.081 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:53.081 03:17:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:53.341 [2024-11-21 03:17:40.714396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.281 "name": "raid_bdev1", 00:08:54.281 "uuid": "8bf45691-80db-4a05-b2db-a3d5e1e30634", 00:08:54.281 "strip_size_kb": 0, 00:08:54.281 "state": "online", 00:08:54.281 "raid_level": "raid1", 00:08:54.281 "superblock": true, 00:08:54.281 "num_base_bdevs": 2, 00:08:54.281 "num_base_bdevs_discovered": 2, 00:08:54.281 "num_base_bdevs_operational": 2, 00:08:54.281 "base_bdevs_list": [ 00:08:54.281 { 00:08:54.281 "name": "BaseBdev1", 00:08:54.281 "uuid": "b0cc07fc-de10-59d6-b365-0632cdc07788", 00:08:54.281 "is_configured": true, 00:08:54.281 "data_offset": 2048, 00:08:54.281 "data_size": 63488 00:08:54.281 }, 00:08:54.281 { 00:08:54.281 "name": "BaseBdev2", 00:08:54.281 "uuid": "f14e1775-692d-5569-ba25-a6c9986985c1", 00:08:54.281 "is_configured": true, 00:08:54.281 "data_offset": 2048, 00:08:54.281 "data_size": 63488 00:08:54.281 } 00:08:54.281 ] 00:08:54.281 }' 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.281 03:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.541 [2024-11-21 03:17:42.079523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.541 [2024-11-21 03:17:42.079572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.541 [2024-11-21 03:17:42.082311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.541 [2024-11-21 03:17:42.082404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.541 [2024-11-21 03:17:42.082537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.541 [2024-11-21 03:17:42.082594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:54.541 { 00:08:54.541 "results": [ 00:08:54.541 { 00:08:54.541 "job": "raid_bdev1", 00:08:54.541 "core_mask": "0x1", 00:08:54.541 "workload": "randrw", 00:08:54.541 "percentage": 50, 00:08:54.541 "status": "finished", 00:08:54.541 "queue_depth": 1, 00:08:54.541 "io_size": 131072, 00:08:54.541 "runtime": 1.362577, 00:08:54.541 "iops": 14435.881421747175, 00:08:54.541 "mibps": 1804.485177718397, 00:08:54.541 "io_failed": 0, 00:08:54.541 "io_timeout": 0, 00:08:54.541 "avg_latency_us": 66.58034045672359, 00:08:54.541 "min_latency_us": 23.428920073215377, 00:08:54.541 "max_latency_us": 1506.5911269938115 00:08:54.541 } 00:08:54.541 ], 00:08:54.541 "core_count": 1 00:08:54.541 } 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76776 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76776 ']' 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76776 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.541 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76776 00:08:54.800 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.800 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.800 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76776' 00:08:54.800 killing process with pid 76776 00:08:54.800 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76776 00:08:54.800 [2024-11-21 03:17:42.133231] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.800 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76776 00:08:54.800 [2024-11-21 03:17:42.163802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xRd9ZcF4pu 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:55.060 00:08:55.060 real 0m3.351s 00:08:55.060 user 0m4.206s 00:08:55.060 sys 0m0.553s 00:08:55.060 ************************************ 00:08:55.060 END TEST raid_read_error_test 00:08:55.060 ************************************ 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.060 03:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.060 03:17:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:55.060 03:17:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.060 03:17:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.060 03:17:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.060 ************************************ 00:08:55.060 START TEST raid_write_error_test 00:08:55.060 ************************************ 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.060 03:17:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Eb5nKVR5cN 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76905 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76905 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76905 ']' 00:08:55.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.060 03:17:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.320 [2024-11-21 03:17:42.660539] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:08:55.320 [2024-11-21 03:17:42.660692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76905 ] 00:08:55.321 [2024-11-21 03:17:42.802210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:55.321 [2024-11-21 03:17:42.839696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.321 [2024-11-21 03:17:42.871984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.579 [2024-11-21 03:17:42.917993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.579 [2024-11-21 03:17:42.918063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.147 BaseBdev1_malloc 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.147 true 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:56.147 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.147 03:17:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.147 [2024-11-21 03:17:43.589119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:56.147 [2024-11-21 03:17:43.589361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.148 [2024-11-21 03:17:43.589411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:56.148 [2024-11-21 03:17:43.589436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.148 [2024-11-21 03:17:43.592769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.148 [2024-11-21 03:17:43.592956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:56.148 BaseBdev1 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.148 BaseBdev2_malloc 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.148 true 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.148 [2024-11-21 03:17:43.632997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:56.148 [2024-11-21 03:17:43.633103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.148 [2024-11-21 03:17:43.633131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:56.148 [2024-11-21 03:17:43.633144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.148 [2024-11-21 03:17:43.635744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.148 [2024-11-21 03:17:43.635888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:56.148 BaseBdev2 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.148 [2024-11-21 03:17:43.645053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.148 [2024-11-21 03:17:43.647292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.148 [2024-11-21 03:17:43.647526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:56.148 [2024-11-21 
03:17:43.647549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:56.148 [2024-11-21 03:17:43.647913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:56.148 [2024-11-21 03:17:43.648148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:56.148 [2024-11-21 03:17:43.648161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:56.148 [2024-11-21 03:17:43.648355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.148 03:17:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.148 "name": "raid_bdev1", 00:08:56.148 "uuid": "5113a72a-91d0-46b0-b20a-6cad2fd2d5db", 00:08:56.148 "strip_size_kb": 0, 00:08:56.148 "state": "online", 00:08:56.148 "raid_level": "raid1", 00:08:56.148 "superblock": true, 00:08:56.148 "num_base_bdevs": 2, 00:08:56.148 "num_base_bdevs_discovered": 2, 00:08:56.148 "num_base_bdevs_operational": 2, 00:08:56.148 "base_bdevs_list": [ 00:08:56.148 { 00:08:56.148 "name": "BaseBdev1", 00:08:56.148 "uuid": "c33cc439-0312-54e0-bdc9-e6a5727f9cb0", 00:08:56.148 "is_configured": true, 00:08:56.148 "data_offset": 2048, 00:08:56.148 "data_size": 63488 00:08:56.148 }, 00:08:56.148 { 00:08:56.148 "name": "BaseBdev2", 00:08:56.148 "uuid": "c0c291a7-7f39-5310-b28e-f9b2e8a291b2", 00:08:56.148 "is_configured": true, 00:08:56.148 "data_offset": 2048, 00:08:56.148 "data_size": 63488 00:08:56.148 } 00:08:56.148 ] 00:08:56.148 }' 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.148 03:17:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.717 03:17:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:56.717 03:17:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:56.717 [2024-11-21 03:17:44.145602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:57.655 03:17:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 [2024-11-21 03:17:45.081581] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:57.655 [2024-11-21 03:17:45.081787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.655 [2024-11-21 03:17:45.082074] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000067d0 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:57.655 
03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.655 "name": "raid_bdev1", 00:08:57.655 "uuid": "5113a72a-91d0-46b0-b20a-6cad2fd2d5db", 00:08:57.655 "strip_size_kb": 0, 00:08:57.655 "state": "online", 00:08:57.655 "raid_level": "raid1", 00:08:57.655 "superblock": true, 00:08:57.655 "num_base_bdevs": 2, 00:08:57.655 "num_base_bdevs_discovered": 1, 00:08:57.655 "num_base_bdevs_operational": 1, 00:08:57.655 "base_bdevs_list": [ 00:08:57.655 { 00:08:57.655 "name": null, 00:08:57.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.655 "is_configured": false, 00:08:57.655 "data_offset": 0, 00:08:57.655 "data_size": 63488 00:08:57.655 }, 00:08:57.655 { 00:08:57.655 "name": "BaseBdev2", 00:08:57.655 "uuid": "c0c291a7-7f39-5310-b28e-f9b2e8a291b2", 00:08:57.655 "is_configured": true, 00:08:57.655 "data_offset": 2048, 00:08:57.655 "data_size": 63488 00:08:57.655 } 00:08:57.655 ] 00:08:57.655 }' 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.655 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.223 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.223 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.223 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.223 [2024-11-21 03:17:45.560610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.223 [2024-11-21 03:17:45.560789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.223 [2024-11-21 03:17:45.563948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.223 [2024-11-21 03:17:45.564000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.223 [2024-11-21 03:17:45.564179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.223 [2024-11-21 03:17:45.564245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:58.223 { 00:08:58.223 "results": [ 00:08:58.223 { 00:08:58.223 "job": "raid_bdev1", 00:08:58.223 "core_mask": "0x1", 00:08:58.223 "workload": "randrw", 00:08:58.223 "percentage": 50, 00:08:58.223 "status": "finished", 00:08:58.223 "queue_depth": 1, 00:08:58.223 "io_size": 131072, 00:08:58.223 "runtime": 1.412804, 00:08:58.223 "iops": 16945.733449225794, 00:08:58.223 "mibps": 2118.2166811532243, 00:08:58.223 "io_failed": 0, 00:08:58.224 "io_timeout": 0, 00:08:58.224 "avg_latency_us": 55.70566035996573, 00:08:58.224 "min_latency_us": 28.56096923211017, 00:08:58.224 "max_latency_us": 1742.2191231587205 00:08:58.224 } 00:08:58.224 ], 00:08:58.224 "core_count": 1 00:08:58.224 } 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76905 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76905 ']' 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76905 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76905 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76905' 00:08:58.224 killing process with pid 76905 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76905 00:08:58.224 [2024-11-21 03:17:45.611095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.224 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76905 00:08:58.224 [2024-11-21 03:17:45.628554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Eb5nKVR5cN 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:58.483 00:08:58.483 real 0m3.323s 00:08:58.483 user 0m4.219s 00:08:58.483 sys 0m0.552s 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.483 03:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.483 ************************************ 00:08:58.483 END TEST raid_write_error_test 00:08:58.483 ************************************ 00:08:58.483 03:17:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:58.483 03:17:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:58.483 03:17:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:58.483 03:17:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.483 03:17:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.483 03:17:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.483 ************************************ 00:08:58.483 START TEST raid_state_function_test 00:08:58.483 ************************************ 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:58.483 03:17:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:58.483 Process raid pid: 77038 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77038 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77038' 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77038 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 77038 ']' 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.483 03:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.743 [2024-11-21 03:17:46.053008] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
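Throughout this log, `verify_raid_bdev_state` checks the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns after filtering it with `jq -r '.[] | select(.name == "raid_bdev1")'`. As a hedged illustration only (this helper is not part of the SPDK test suite), the same field checks can be sketched in Python against the raid_bdev1 info captured in the raid_write_error_test run above, where injecting a write error on BaseBdev1 drops `num_base_bdevs_discovered` from 2 to 1 while the raid1 bdev stays online:

```python
import json

# Abridged bdev_raid_get_bdevs output as captured in this log: first while
# both base bdevs are healthy, then after bdev_error_inject_error failed
# writes on BaseBdev1 and the base bdev was removed from slot 0.
before_failure = json.loads("""{
  "name": "raid_bdev1", "state": "online", "raid_level": "raid1",
  "strip_size_kb": 0, "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2
}""")

after_failure = json.loads("""{
  "name": "raid_bdev1", "state": "online", "raid_level": "raid1",
  "strip_size_kb": 0, "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1
}""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Mirrors the checks the shell helper applies to the jq-selected info.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# The two verifications performed by raid_write_error_test in this log:
verify_raid_bdev_state(before_failure, "online", "raid1", 0, 2)
verify_raid_bdev_state(after_failure, "online", "raid1", 0, 1)
print("raid1 survives a single base bdev write failure")
```

This only restates the checks visible in the xtrace; the actual shell helper inspects further fields of the same JSON (e.g. `num_base_bdevs_discovered` and `base_bdevs_list`), which are omitted here.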
00:08:58.743 [2024-11-21 03:17:46.053242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.743 [2024-11-21 03:17:46.195984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:58.743 [2024-11-21 03:17:46.219682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.743 [2024-11-21 03:17:46.260499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.002 [2024-11-21 03:17:46.339418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.002 [2024-11-21 03:17:46.339461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.572 [2024-11-21 03:17:46.901003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.572 [2024-11-21 03:17:46.901072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.572 [2024-11-21 03:17:46.901086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.572 [2024-11-21 03:17:46.901094] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.572 [2024-11-21 03:17:46.901109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.572 [2024-11-21 03:17:46.901116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.572 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.573 03:17:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.573 "name": "Existed_Raid", 00:08:59.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.573 "strip_size_kb": 64, 00:08:59.573 "state": "configuring", 00:08:59.573 "raid_level": "raid0", 00:08:59.573 "superblock": false, 00:08:59.573 "num_base_bdevs": 3, 00:08:59.573 "num_base_bdevs_discovered": 0, 00:08:59.573 "num_base_bdevs_operational": 3, 00:08:59.573 "base_bdevs_list": [ 00:08:59.573 { 00:08:59.573 "name": "BaseBdev1", 00:08:59.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.573 "is_configured": false, 00:08:59.573 "data_offset": 0, 00:08:59.573 "data_size": 0 00:08:59.573 }, 00:08:59.573 { 00:08:59.573 "name": "BaseBdev2", 00:08:59.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.573 "is_configured": false, 00:08:59.573 "data_offset": 0, 00:08:59.573 "data_size": 0 00:08:59.573 }, 00:08:59.573 { 00:08:59.573 "name": "BaseBdev3", 00:08:59.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.573 "is_configured": false, 00:08:59.573 "data_offset": 0, 00:08:59.573 "data_size": 0 00:08:59.573 } 00:08:59.573 ] 00:08:59.573 }' 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.573 03:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.832 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.832 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.833 [2024-11-21 03:17:47.281006] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.833 [2024-11-21 03:17:47.281113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.833 [2024-11-21 03:17:47.289039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.833 [2024-11-21 03:17:47.289123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.833 [2024-11-21 03:17:47.289157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.833 [2024-11-21 03:17:47.289178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.833 [2024-11-21 03:17:47.289199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.833 [2024-11-21 03:17:47.289218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.833 
[2024-11-21 03:17:47.312466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.833 BaseBdev1 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.833 [ 00:08:59.833 { 00:08:59.833 "name": "BaseBdev1", 00:08:59.833 "aliases": [ 00:08:59.833 "2185c956-bc98-4d2f-8bb5-602cc6aa6a23" 00:08:59.833 ], 00:08:59.833 "product_name": "Malloc disk", 00:08:59.833 "block_size": 512, 00:08:59.833 "num_blocks": 65536, 00:08:59.833 "uuid": 
"2185c956-bc98-4d2f-8bb5-602cc6aa6a23", 00:08:59.833 "assigned_rate_limits": { 00:08:59.833 "rw_ios_per_sec": 0, 00:08:59.833 "rw_mbytes_per_sec": 0, 00:08:59.833 "r_mbytes_per_sec": 0, 00:08:59.833 "w_mbytes_per_sec": 0 00:08:59.833 }, 00:08:59.833 "claimed": true, 00:08:59.833 "claim_type": "exclusive_write", 00:08:59.833 "zoned": false, 00:08:59.833 "supported_io_types": { 00:08:59.833 "read": true, 00:08:59.833 "write": true, 00:08:59.833 "unmap": true, 00:08:59.833 "flush": true, 00:08:59.833 "reset": true, 00:08:59.833 "nvme_admin": false, 00:08:59.833 "nvme_io": false, 00:08:59.833 "nvme_io_md": false, 00:08:59.833 "write_zeroes": true, 00:08:59.833 "zcopy": true, 00:08:59.833 "get_zone_info": false, 00:08:59.833 "zone_management": false, 00:08:59.833 "zone_append": false, 00:08:59.833 "compare": false, 00:08:59.833 "compare_and_write": false, 00:08:59.833 "abort": true, 00:08:59.833 "seek_hole": false, 00:08:59.833 "seek_data": false, 00:08:59.833 "copy": true, 00:08:59.833 "nvme_iov_md": false 00:08:59.833 }, 00:08:59.833 "memory_domains": [ 00:08:59.833 { 00:08:59.833 "dma_device_id": "system", 00:08:59.833 "dma_device_type": 1 00:08:59.833 }, 00:08:59.833 { 00:08:59.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.833 "dma_device_type": 2 00:08:59.833 } 00:08:59.833 ], 00:08:59.833 "driver_specific": {} 00:08:59.833 } 00:08:59.833 ] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.833 03:17:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.833 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.833 "name": "Existed_Raid", 00:08:59.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.833 "strip_size_kb": 64, 00:08:59.833 "state": "configuring", 00:08:59.833 "raid_level": "raid0", 00:08:59.833 "superblock": false, 00:08:59.833 "num_base_bdevs": 3, 00:08:59.833 "num_base_bdevs_discovered": 1, 00:08:59.833 "num_base_bdevs_operational": 3, 00:08:59.833 "base_bdevs_list": [ 00:08:59.833 { 00:08:59.833 "name": "BaseBdev1", 00:08:59.833 "uuid": "2185c956-bc98-4d2f-8bb5-602cc6aa6a23", 00:08:59.833 "is_configured": true, 00:08:59.833 "data_offset": 0, 
00:08:59.833 "data_size": 65536 00:08:59.833 }, 00:08:59.833 { 00:08:59.833 "name": "BaseBdev2", 00:08:59.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.833 "is_configured": false, 00:08:59.833 "data_offset": 0, 00:08:59.833 "data_size": 0 00:08:59.833 }, 00:08:59.833 { 00:08:59.833 "name": "BaseBdev3", 00:08:59.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.833 "is_configured": false, 00:08:59.833 "data_offset": 0, 00:08:59.833 "data_size": 0 00:08:59.833 } 00:08:59.833 ] 00:08:59.833 }' 00:09:00.093 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.093 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 [2024-11-21 03:17:47.808687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.353 [2024-11-21 03:17:47.808829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 [2024-11-21 03:17:47.820715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.353 [2024-11-21 
03:17:47.823135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.353 [2024-11-21 03:17:47.823214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.353 [2024-11-21 03:17:47.823249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.353 [2024-11-21 03:17:47.823272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.353 03:17:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.353 "name": "Existed_Raid", 00:09:00.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.353 "strip_size_kb": 64, 00:09:00.353 "state": "configuring", 00:09:00.353 "raid_level": "raid0", 00:09:00.353 "superblock": false, 00:09:00.353 "num_base_bdevs": 3, 00:09:00.353 "num_base_bdevs_discovered": 1, 00:09:00.353 "num_base_bdevs_operational": 3, 00:09:00.353 "base_bdevs_list": [ 00:09:00.353 { 00:09:00.353 "name": "BaseBdev1", 00:09:00.353 "uuid": "2185c956-bc98-4d2f-8bb5-602cc6aa6a23", 00:09:00.353 "is_configured": true, 00:09:00.353 "data_offset": 0, 00:09:00.353 "data_size": 65536 00:09:00.353 }, 00:09:00.353 { 00:09:00.353 "name": "BaseBdev2", 00:09:00.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.353 "is_configured": false, 00:09:00.353 "data_offset": 0, 00:09:00.353 "data_size": 0 00:09:00.353 }, 00:09:00.353 { 00:09:00.353 "name": "BaseBdev3", 00:09:00.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.353 "is_configured": false, 00:09:00.353 "data_offset": 0, 00:09:00.353 "data_size": 0 00:09:00.353 } 00:09:00.353 ] 00:09:00.353 }' 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.353 03:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 03:17:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 [2024-11-21 03:17:48.282137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.948 BaseBdev2 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.948 [ 00:09:00.948 { 00:09:00.948 "name": "BaseBdev2", 00:09:00.948 "aliases": [ 00:09:00.948 "df554e3a-1047-4380-ba86-88f0017b3224" 00:09:00.948 ], 00:09:00.948 "product_name": "Malloc disk", 00:09:00.948 "block_size": 512, 00:09:00.948 "num_blocks": 65536, 00:09:00.948 "uuid": "df554e3a-1047-4380-ba86-88f0017b3224", 00:09:00.948 "assigned_rate_limits": { 00:09:00.948 "rw_ios_per_sec": 0, 00:09:00.948 "rw_mbytes_per_sec": 0, 00:09:00.948 "r_mbytes_per_sec": 0, 00:09:00.948 "w_mbytes_per_sec": 0 00:09:00.948 }, 00:09:00.948 "claimed": true, 00:09:00.948 "claim_type": "exclusive_write", 00:09:00.948 "zoned": false, 00:09:00.948 "supported_io_types": { 00:09:00.948 "read": true, 00:09:00.948 "write": true, 00:09:00.948 "unmap": true, 00:09:00.948 "flush": true, 00:09:00.948 "reset": true, 00:09:00.948 "nvme_admin": false, 00:09:00.948 "nvme_io": false, 00:09:00.948 "nvme_io_md": false, 00:09:00.948 "write_zeroes": true, 00:09:00.948 "zcopy": true, 00:09:00.948 "get_zone_info": false, 00:09:00.948 "zone_management": false, 00:09:00.948 "zone_append": false, 00:09:00.948 "compare": false, 00:09:00.948 "compare_and_write": false, 00:09:00.948 "abort": true, 00:09:00.948 "seek_hole": false, 00:09:00.948 "seek_data": false, 00:09:00.948 "copy": true, 00:09:00.948 "nvme_iov_md": false 00:09:00.948 }, 00:09:00.948 "memory_domains": [ 00:09:00.948 { 00:09:00.948 "dma_device_id": "system", 00:09:00.948 "dma_device_type": 1 00:09:00.948 }, 00:09:00.948 { 00:09:00.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.948 "dma_device_type": 2 00:09:00.948 } 00:09:00.948 ], 00:09:00.948 "driver_specific": {} 00:09:00.948 } 00:09:00.948 ] 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.948 "name": "Existed_Raid", 
00:09:00.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.948 "strip_size_kb": 64, 00:09:00.948 "state": "configuring", 00:09:00.948 "raid_level": "raid0", 00:09:00.948 "superblock": false, 00:09:00.948 "num_base_bdevs": 3, 00:09:00.948 "num_base_bdevs_discovered": 2, 00:09:00.948 "num_base_bdevs_operational": 3, 00:09:00.948 "base_bdevs_list": [ 00:09:00.948 { 00:09:00.948 "name": "BaseBdev1", 00:09:00.948 "uuid": "2185c956-bc98-4d2f-8bb5-602cc6aa6a23", 00:09:00.948 "is_configured": true, 00:09:00.948 "data_offset": 0, 00:09:00.948 "data_size": 65536 00:09:00.948 }, 00:09:00.948 { 00:09:00.948 "name": "BaseBdev2", 00:09:00.948 "uuid": "df554e3a-1047-4380-ba86-88f0017b3224", 00:09:00.948 "is_configured": true, 00:09:00.948 "data_offset": 0, 00:09:00.948 "data_size": 65536 00:09:00.948 }, 00:09:00.948 { 00:09:00.948 "name": "BaseBdev3", 00:09:00.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.948 "is_configured": false, 00:09:00.948 "data_offset": 0, 00:09:00.948 "data_size": 0 00:09:00.948 } 00:09:00.948 ] 00:09:00.948 }' 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.948 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 [2024-11-21 03:17:48.764715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.207 [2024-11-21 03:17:48.764892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:01.207 [2024-11-21 03:17:48.764913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:09:01.207 [2024-11-21 03:17:48.765407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:01.207 [2024-11-21 03:17:48.765681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:01.207 [2024-11-21 03:17:48.765706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:01.207 [2024-11-21 03:17:48.766104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.207 BaseBdev3 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.207 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 [ 00:09:01.467 { 00:09:01.467 "name": "BaseBdev3", 00:09:01.467 "aliases": [ 00:09:01.467 "ea29a44e-ad5a-4a78-9152-0368f1ed2243" 00:09:01.467 ], 00:09:01.467 "product_name": "Malloc disk", 00:09:01.467 "block_size": 512, 00:09:01.467 "num_blocks": 65536, 00:09:01.467 "uuid": "ea29a44e-ad5a-4a78-9152-0368f1ed2243", 00:09:01.467 "assigned_rate_limits": { 00:09:01.467 "rw_ios_per_sec": 0, 00:09:01.467 "rw_mbytes_per_sec": 0, 00:09:01.467 "r_mbytes_per_sec": 0, 00:09:01.467 "w_mbytes_per_sec": 0 00:09:01.467 }, 00:09:01.467 "claimed": true, 00:09:01.467 "claim_type": "exclusive_write", 00:09:01.467 "zoned": false, 00:09:01.467 "supported_io_types": { 00:09:01.467 "read": true, 00:09:01.467 "write": true, 00:09:01.467 "unmap": true, 00:09:01.467 "flush": true, 00:09:01.467 "reset": true, 00:09:01.467 "nvme_admin": false, 00:09:01.467 "nvme_io": false, 00:09:01.467 "nvme_io_md": false, 00:09:01.467 "write_zeroes": true, 00:09:01.467 "zcopy": true, 00:09:01.467 "get_zone_info": false, 00:09:01.467 "zone_management": false, 00:09:01.467 "zone_append": false, 00:09:01.467 "compare": false, 00:09:01.467 "compare_and_write": false, 00:09:01.467 "abort": true, 00:09:01.467 "seek_hole": false, 00:09:01.467 "seek_data": false, 00:09:01.467 "copy": true, 00:09:01.467 "nvme_iov_md": false 00:09:01.467 }, 00:09:01.467 "memory_domains": [ 00:09:01.467 { 00:09:01.467 "dma_device_id": "system", 00:09:01.467 "dma_device_type": 1 00:09:01.467 }, 00:09:01.467 { 00:09:01.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.467 "dma_device_type": 2 00:09:01.467 } 00:09:01.467 ], 00:09:01.467 "driver_specific": {} 00:09:01.467 } 00:09:01.467 ] 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.467 "name": "Existed_Raid", 00:09:01.467 "uuid": "9de378f5-5604-46c7-ac45-0727c12799fa", 00:09:01.467 "strip_size_kb": 64, 00:09:01.467 "state": "online", 00:09:01.467 "raid_level": "raid0", 00:09:01.467 "superblock": false, 00:09:01.467 "num_base_bdevs": 3, 00:09:01.467 "num_base_bdevs_discovered": 3, 00:09:01.467 "num_base_bdevs_operational": 3, 00:09:01.467 "base_bdevs_list": [ 00:09:01.467 { 00:09:01.467 "name": "BaseBdev1", 00:09:01.467 "uuid": "2185c956-bc98-4d2f-8bb5-602cc6aa6a23", 00:09:01.467 "is_configured": true, 00:09:01.467 "data_offset": 0, 00:09:01.467 "data_size": 65536 00:09:01.467 }, 00:09:01.467 { 00:09:01.467 "name": "BaseBdev2", 00:09:01.467 "uuid": "df554e3a-1047-4380-ba86-88f0017b3224", 00:09:01.467 "is_configured": true, 00:09:01.467 "data_offset": 0, 00:09:01.467 "data_size": 65536 00:09:01.467 }, 00:09:01.467 { 00:09:01.467 "name": "BaseBdev3", 00:09:01.467 "uuid": "ea29a44e-ad5a-4a78-9152-0368f1ed2243", 00:09:01.467 "is_configured": true, 00:09:01.467 "data_offset": 0, 00:09:01.467 "data_size": 65536 00:09:01.467 } 00:09:01.467 ] 00:09:01.467 }' 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.467 03:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.727 [2024-11-21 03:17:49.217343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.727 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.727 "name": "Existed_Raid", 00:09:01.727 "aliases": [ 00:09:01.727 "9de378f5-5604-46c7-ac45-0727c12799fa" 00:09:01.727 ], 00:09:01.727 "product_name": "Raid Volume", 00:09:01.727 "block_size": 512, 00:09:01.727 "num_blocks": 196608, 00:09:01.727 "uuid": "9de378f5-5604-46c7-ac45-0727c12799fa", 00:09:01.727 "assigned_rate_limits": { 00:09:01.727 "rw_ios_per_sec": 0, 00:09:01.727 "rw_mbytes_per_sec": 0, 00:09:01.727 "r_mbytes_per_sec": 0, 00:09:01.727 "w_mbytes_per_sec": 0 00:09:01.727 }, 00:09:01.727 "claimed": false, 00:09:01.727 "zoned": false, 00:09:01.727 "supported_io_types": { 00:09:01.727 "read": true, 00:09:01.727 "write": true, 00:09:01.727 "unmap": true, 00:09:01.727 "flush": true, 00:09:01.727 "reset": true, 00:09:01.727 "nvme_admin": false, 00:09:01.727 "nvme_io": false, 00:09:01.727 "nvme_io_md": false, 00:09:01.727 "write_zeroes": true, 00:09:01.727 "zcopy": false, 00:09:01.727 "get_zone_info": false, 00:09:01.727 "zone_management": false, 00:09:01.727 "zone_append": false, 00:09:01.727 "compare": false, 00:09:01.727 "compare_and_write": false, 00:09:01.727 "abort": false, 00:09:01.727 "seek_hole": false, 00:09:01.727 "seek_data": false, 00:09:01.727 "copy": 
false, 00:09:01.727 "nvme_iov_md": false 00:09:01.727 }, 00:09:01.727 "memory_domains": [ 00:09:01.727 { 00:09:01.727 "dma_device_id": "system", 00:09:01.727 "dma_device_type": 1 00:09:01.727 }, 00:09:01.727 { 00:09:01.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.727 "dma_device_type": 2 00:09:01.727 }, 00:09:01.727 { 00:09:01.727 "dma_device_id": "system", 00:09:01.727 "dma_device_type": 1 00:09:01.727 }, 00:09:01.727 { 00:09:01.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.727 "dma_device_type": 2 00:09:01.727 }, 00:09:01.727 { 00:09:01.728 "dma_device_id": "system", 00:09:01.728 "dma_device_type": 1 00:09:01.728 }, 00:09:01.728 { 00:09:01.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.728 "dma_device_type": 2 00:09:01.728 } 00:09:01.728 ], 00:09:01.728 "driver_specific": { 00:09:01.728 "raid": { 00:09:01.728 "uuid": "9de378f5-5604-46c7-ac45-0727c12799fa", 00:09:01.728 "strip_size_kb": 64, 00:09:01.728 "state": "online", 00:09:01.728 "raid_level": "raid0", 00:09:01.728 "superblock": false, 00:09:01.728 "num_base_bdevs": 3, 00:09:01.728 "num_base_bdevs_discovered": 3, 00:09:01.728 "num_base_bdevs_operational": 3, 00:09:01.728 "base_bdevs_list": [ 00:09:01.728 { 00:09:01.728 "name": "BaseBdev1", 00:09:01.728 "uuid": "2185c956-bc98-4d2f-8bb5-602cc6aa6a23", 00:09:01.728 "is_configured": true, 00:09:01.728 "data_offset": 0, 00:09:01.728 "data_size": 65536 00:09:01.728 }, 00:09:01.728 { 00:09:01.728 "name": "BaseBdev2", 00:09:01.728 "uuid": "df554e3a-1047-4380-ba86-88f0017b3224", 00:09:01.728 "is_configured": true, 00:09:01.728 "data_offset": 0, 00:09:01.728 "data_size": 65536 00:09:01.728 }, 00:09:01.728 { 00:09:01.728 "name": "BaseBdev3", 00:09:01.728 "uuid": "ea29a44e-ad5a-4a78-9152-0368f1ed2243", 00:09:01.728 "is_configured": true, 00:09:01.728 "data_offset": 0, 00:09:01.728 "data_size": 65536 00:09:01.728 } 00:09:01.728 ] 00:09:01.728 } 00:09:01.728 } 00:09:01.728 }' 00:09:01.728 03:17:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.987 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.987 BaseBdev2 00:09:01.987 BaseBdev3' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 [2024-11-21 03:17:49.469106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.988 [2024-11-21 03:17:49.469146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.988 [2024-11-21 03:17:49.469235] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.988 03:17:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.988 "name": "Existed_Raid", 00:09:01.988 "uuid": "9de378f5-5604-46c7-ac45-0727c12799fa", 00:09:01.988 "strip_size_kb": 64, 00:09:01.988 "state": "offline", 00:09:01.988 "raid_level": "raid0", 00:09:01.988 "superblock": false, 00:09:01.988 "num_base_bdevs": 3, 00:09:01.988 "num_base_bdevs_discovered": 2, 00:09:01.988 "num_base_bdevs_operational": 2, 00:09:01.988 "base_bdevs_list": [ 00:09:01.988 { 00:09:01.988 "name": null, 00:09:01.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.988 "is_configured": false, 00:09:01.988 "data_offset": 0, 00:09:01.988 "data_size": 65536 00:09:01.988 }, 00:09:01.988 { 00:09:01.988 "name": "BaseBdev2", 00:09:01.988 "uuid": "df554e3a-1047-4380-ba86-88f0017b3224", 00:09:01.988 "is_configured": true, 00:09:01.988 "data_offset": 0, 00:09:01.988 "data_size": 65536 00:09:01.988 }, 00:09:01.988 { 00:09:01.988 "name": "BaseBdev3", 00:09:01.988 "uuid": "ea29a44e-ad5a-4a78-9152-0368f1ed2243", 00:09:01.988 "is_configured": true, 00:09:01.988 "data_offset": 0, 00:09:01.988 "data_size": 65536 00:09:01.988 } 00:09:01.988 ] 00:09:01.988 }' 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.988 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.565 03:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.565 [2024-11-21 03:17:49.986855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.565 [2024-11-21 03:17:50.064685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.565 [2024-11-21 03:17:50.064764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.565 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.825 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.825 BaseBdev2 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.826 [ 00:09:02.826 { 00:09:02.826 "name": "BaseBdev2", 00:09:02.826 "aliases": [ 00:09:02.826 "cd0345a8-7885-4b6a-95cc-d3984128259c" 00:09:02.826 ], 00:09:02.826 "product_name": "Malloc disk", 00:09:02.826 "block_size": 512, 00:09:02.826 "num_blocks": 65536, 00:09:02.826 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:02.826 "assigned_rate_limits": { 00:09:02.826 "rw_ios_per_sec": 0, 00:09:02.826 "rw_mbytes_per_sec": 0, 00:09:02.826 "r_mbytes_per_sec": 0, 00:09:02.826 "w_mbytes_per_sec": 0 00:09:02.826 }, 00:09:02.826 "claimed": false, 00:09:02.826 "zoned": false, 00:09:02.826 "supported_io_types": { 00:09:02.826 "read": true, 00:09:02.826 "write": true, 00:09:02.826 "unmap": true, 00:09:02.826 "flush": true, 00:09:02.826 "reset": true, 00:09:02.826 "nvme_admin": false, 00:09:02.826 "nvme_io": false, 00:09:02.826 "nvme_io_md": false, 00:09:02.826 "write_zeroes": true, 00:09:02.826 "zcopy": true, 00:09:02.826 "get_zone_info": false, 00:09:02.826 "zone_management": false, 00:09:02.826 "zone_append": false, 00:09:02.826 "compare": false, 00:09:02.826 "compare_and_write": false, 00:09:02.826 "abort": true, 00:09:02.826 "seek_hole": false, 00:09:02.826 "seek_data": false, 00:09:02.826 "copy": true, 00:09:02.826 "nvme_iov_md": false 00:09:02.826 }, 00:09:02.826 "memory_domains": [ 00:09:02.826 { 00:09:02.826 "dma_device_id": "system", 00:09:02.826 "dma_device_type": 1 00:09:02.826 }, 00:09:02.826 { 00:09:02.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.826 "dma_device_type": 2 00:09:02.826 } 00:09:02.826 ], 00:09:02.826 "driver_specific": {} 00:09:02.826 } 00:09:02.826 ] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.826 BaseBdev3 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.826 [ 00:09:02.826 { 00:09:02.826 "name": "BaseBdev3", 00:09:02.826 "aliases": [ 00:09:02.826 "3b663397-e762-488a-87d4-74cccde7eab3" 00:09:02.826 ], 00:09:02.826 "product_name": "Malloc disk", 00:09:02.826 "block_size": 512, 00:09:02.826 "num_blocks": 65536, 00:09:02.826 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:02.826 "assigned_rate_limits": { 00:09:02.826 "rw_ios_per_sec": 0, 00:09:02.826 "rw_mbytes_per_sec": 0, 00:09:02.826 "r_mbytes_per_sec": 0, 00:09:02.826 "w_mbytes_per_sec": 0 00:09:02.826 }, 00:09:02.826 "claimed": false, 00:09:02.826 "zoned": false, 00:09:02.826 "supported_io_types": { 00:09:02.826 "read": true, 00:09:02.826 "write": true, 00:09:02.826 "unmap": true, 00:09:02.826 "flush": true, 00:09:02.826 "reset": true, 00:09:02.826 "nvme_admin": false, 00:09:02.826 "nvme_io": false, 00:09:02.826 "nvme_io_md": false, 00:09:02.826 "write_zeroes": true, 00:09:02.826 "zcopy": true, 00:09:02.826 "get_zone_info": false, 00:09:02.826 "zone_management": false, 00:09:02.826 "zone_append": false, 00:09:02.826 "compare": false, 00:09:02.826 "compare_and_write": false, 00:09:02.826 "abort": true, 00:09:02.826 "seek_hole": false, 00:09:02.826 "seek_data": false, 00:09:02.826 "copy": true, 00:09:02.826 "nvme_iov_md": false 00:09:02.826 }, 00:09:02.826 "memory_domains": [ 00:09:02.826 { 00:09:02.826 "dma_device_id": "system", 00:09:02.826 "dma_device_type": 1 00:09:02.826 }, 00:09:02.826 { 00:09:02.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.826 "dma_device_type": 2 00:09:02.826 } 00:09:02.826 ], 00:09:02.826 "driver_specific": {} 00:09:02.826 } 00:09:02.826 ] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.826 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.827 [2024-11-21 03:17:50.263541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.827 [2024-11-21 03:17:50.263661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.827 [2024-11-21 03:17:50.263717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.827 [2024-11-21 03:17:50.266168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.827 03:17:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.827 "name": "Existed_Raid", 00:09:02.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.827 "strip_size_kb": 64, 00:09:02.827 "state": "configuring", 00:09:02.827 "raid_level": "raid0", 00:09:02.827 "superblock": false, 00:09:02.827 "num_base_bdevs": 3, 00:09:02.827 "num_base_bdevs_discovered": 2, 00:09:02.827 "num_base_bdevs_operational": 3, 00:09:02.827 "base_bdevs_list": [ 00:09:02.827 { 00:09:02.827 "name": "BaseBdev1", 00:09:02.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.827 "is_configured": false, 00:09:02.827 "data_offset": 0, 00:09:02.827 "data_size": 0 00:09:02.827 }, 00:09:02.827 { 00:09:02.827 "name": "BaseBdev2", 00:09:02.827 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:02.827 "is_configured": true, 00:09:02.827 
"data_offset": 0, 00:09:02.827 "data_size": 65536 00:09:02.827 }, 00:09:02.827 { 00:09:02.827 "name": "BaseBdev3", 00:09:02.827 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:02.827 "is_configured": true, 00:09:02.827 "data_offset": 0, 00:09:02.827 "data_size": 65536 00:09:02.827 } 00:09:02.827 ] 00:09:02.827 }' 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.827 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.398 [2024-11-21 03:17:50.699676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.398 "name": "Existed_Raid", 00:09:03.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.398 "strip_size_kb": 64, 00:09:03.398 "state": "configuring", 00:09:03.398 "raid_level": "raid0", 00:09:03.398 "superblock": false, 00:09:03.398 "num_base_bdevs": 3, 00:09:03.398 "num_base_bdevs_discovered": 1, 00:09:03.398 "num_base_bdevs_operational": 3, 00:09:03.398 "base_bdevs_list": [ 00:09:03.398 { 00:09:03.398 "name": "BaseBdev1", 00:09:03.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.398 "is_configured": false, 00:09:03.398 "data_offset": 0, 00:09:03.398 "data_size": 0 00:09:03.398 }, 00:09:03.398 { 00:09:03.398 "name": null, 00:09:03.398 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:03.398 "is_configured": false, 00:09:03.398 "data_offset": 0, 00:09:03.398 "data_size": 65536 00:09:03.398 }, 00:09:03.398 { 00:09:03.398 "name": "BaseBdev3", 00:09:03.398 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:03.398 "is_configured": true, 00:09:03.398 "data_offset": 0, 00:09:03.398 "data_size": 65536 00:09:03.398 } 00:09:03.398 ] 
00:09:03.398 }' 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.398 03:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 [2024-11-21 03:17:51.169569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.657 BaseBdev1 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.657 03:17:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 [ 00:09:03.657 { 00:09:03.657 "name": "BaseBdev1", 00:09:03.657 "aliases": [ 00:09:03.657 "bc69768a-b4d7-4eb6-99db-697d72994d15" 00:09:03.657 ], 00:09:03.657 "product_name": "Malloc disk", 00:09:03.657 "block_size": 512, 00:09:03.657 "num_blocks": 65536, 00:09:03.657 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:03.657 "assigned_rate_limits": { 00:09:03.657 "rw_ios_per_sec": 0, 00:09:03.657 "rw_mbytes_per_sec": 0, 00:09:03.657 "r_mbytes_per_sec": 0, 00:09:03.657 "w_mbytes_per_sec": 0 00:09:03.657 }, 00:09:03.657 "claimed": true, 00:09:03.657 "claim_type": "exclusive_write", 00:09:03.657 "zoned": false, 00:09:03.657 "supported_io_types": { 00:09:03.657 "read": true, 00:09:03.657 "write": true, 00:09:03.657 "unmap": true, 00:09:03.657 "flush": true, 00:09:03.657 "reset": true, 00:09:03.657 "nvme_admin": false, 00:09:03.657 "nvme_io": false, 00:09:03.657 "nvme_io_md": false, 00:09:03.657 "write_zeroes": true, 00:09:03.657 "zcopy": true, 00:09:03.657 "get_zone_info": false, 
00:09:03.657 "zone_management": false, 00:09:03.657 "zone_append": false, 00:09:03.657 "compare": false, 00:09:03.657 "compare_and_write": false, 00:09:03.657 "abort": true, 00:09:03.657 "seek_hole": false, 00:09:03.657 "seek_data": false, 00:09:03.657 "copy": true, 00:09:03.657 "nvme_iov_md": false 00:09:03.657 }, 00:09:03.657 "memory_domains": [ 00:09:03.657 { 00:09:03.657 "dma_device_id": "system", 00:09:03.657 "dma_device_type": 1 00:09:03.657 }, 00:09:03.657 { 00:09:03.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.657 "dma_device_type": 2 00:09:03.657 } 00:09:03.657 ], 00:09:03.657 "driver_specific": {} 00:09:03.657 } 00:09:03.657 ] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.657 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.917 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.917 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.917 "name": "Existed_Raid", 00:09:03.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.917 "strip_size_kb": 64, 00:09:03.917 "state": "configuring", 00:09:03.917 "raid_level": "raid0", 00:09:03.917 "superblock": false, 00:09:03.917 "num_base_bdevs": 3, 00:09:03.917 "num_base_bdevs_discovered": 2, 00:09:03.917 "num_base_bdevs_operational": 3, 00:09:03.917 "base_bdevs_list": [ 00:09:03.917 { 00:09:03.917 "name": "BaseBdev1", 00:09:03.917 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:03.917 "is_configured": true, 00:09:03.917 "data_offset": 0, 00:09:03.917 "data_size": 65536 00:09:03.917 }, 00:09:03.917 { 00:09:03.917 "name": null, 00:09:03.917 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:03.917 "is_configured": false, 00:09:03.917 "data_offset": 0, 00:09:03.917 "data_size": 65536 00:09:03.917 }, 00:09:03.917 { 00:09:03.917 "name": "BaseBdev3", 00:09:03.917 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:03.917 "is_configured": true, 00:09:03.917 "data_offset": 0, 00:09:03.917 "data_size": 65536 00:09:03.917 } 00:09:03.917 ] 00:09:03.917 }' 00:09:03.917 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.917 03:17:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.178 [2024-11-21 03:17:51.725913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.178 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.437 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.437 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.437 "name": "Existed_Raid", 00:09:04.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.437 "strip_size_kb": 64, 00:09:04.437 "state": "configuring", 00:09:04.437 "raid_level": "raid0", 00:09:04.437 "superblock": false, 00:09:04.437 "num_base_bdevs": 3, 00:09:04.437 "num_base_bdevs_discovered": 1, 00:09:04.437 "num_base_bdevs_operational": 3, 00:09:04.437 "base_bdevs_list": [ 00:09:04.437 { 00:09:04.437 "name": "BaseBdev1", 00:09:04.437 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:04.437 "is_configured": true, 00:09:04.437 "data_offset": 0, 00:09:04.437 "data_size": 65536 00:09:04.437 }, 00:09:04.437 { 00:09:04.437 "name": null, 00:09:04.437 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:04.437 "is_configured": false, 00:09:04.437 "data_offset": 0, 00:09:04.437 "data_size": 65536 00:09:04.437 }, 00:09:04.437 { 
00:09:04.437 "name": null, 00:09:04.437 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:04.437 "is_configured": false, 00:09:04.437 "data_offset": 0, 00:09:04.437 "data_size": 65536 00:09:04.437 } 00:09:04.437 ] 00:09:04.437 }' 00:09:04.437 03:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.437 03:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.697 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.697 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.697 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.697 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.697 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.956 [2024-11-21 03:17:52.282117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.956 03:17:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.956 "name": "Existed_Raid", 00:09:04.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.956 "strip_size_kb": 64, 00:09:04.956 "state": "configuring", 00:09:04.956 "raid_level": "raid0", 00:09:04.956 "superblock": false, 00:09:04.956 "num_base_bdevs": 3, 00:09:04.956 "num_base_bdevs_discovered": 2, 00:09:04.956 "num_base_bdevs_operational": 3, 00:09:04.956 "base_bdevs_list": [ 00:09:04.956 { 00:09:04.956 "name": "BaseBdev1", 
00:09:04.956 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:04.956 "is_configured": true, 00:09:04.956 "data_offset": 0, 00:09:04.956 "data_size": 65536 00:09:04.956 }, 00:09:04.956 { 00:09:04.956 "name": null, 00:09:04.956 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:04.956 "is_configured": false, 00:09:04.956 "data_offset": 0, 00:09:04.956 "data_size": 65536 00:09:04.956 }, 00:09:04.956 { 00:09:04.956 "name": "BaseBdev3", 00:09:04.956 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:04.956 "is_configured": true, 00:09:04.956 "data_offset": 0, 00:09:04.956 "data_size": 65536 00:09:04.956 } 00:09:04.956 ] 00:09:04.956 }' 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.956 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.215 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.215 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.215 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.215 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.473 [2024-11-21 03:17:52.834329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.473 03:17:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.473 "name": "Existed_Raid", 00:09:05.473 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:05.473 "strip_size_kb": 64, 00:09:05.473 "state": "configuring", 00:09:05.473 "raid_level": "raid0", 00:09:05.473 "superblock": false, 00:09:05.473 "num_base_bdevs": 3, 00:09:05.473 "num_base_bdevs_discovered": 1, 00:09:05.473 "num_base_bdevs_operational": 3, 00:09:05.473 "base_bdevs_list": [ 00:09:05.473 { 00:09:05.473 "name": null, 00:09:05.473 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:05.473 "is_configured": false, 00:09:05.473 "data_offset": 0, 00:09:05.473 "data_size": 65536 00:09:05.473 }, 00:09:05.473 { 00:09:05.473 "name": null, 00:09:05.473 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:05.473 "is_configured": false, 00:09:05.473 "data_offset": 0, 00:09:05.473 "data_size": 65536 00:09:05.473 }, 00:09:05.473 { 00:09:05.473 "name": "BaseBdev3", 00:09:05.473 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:05.473 "is_configured": true, 00:09:05.473 "data_offset": 0, 00:09:05.473 "data_size": 65536 00:09:05.473 } 00:09:05.473 ] 00:09:05.473 }' 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.473 03:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.735 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.735 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.735 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.735 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.994 [2024-11-21 03:17:53.339755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.994 "name": "Existed_Raid", 00:09:05.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.994 "strip_size_kb": 64, 00:09:05.994 "state": "configuring", 00:09:05.994 "raid_level": "raid0", 00:09:05.994 "superblock": false, 00:09:05.994 "num_base_bdevs": 3, 00:09:05.994 "num_base_bdevs_discovered": 2, 00:09:05.994 "num_base_bdevs_operational": 3, 00:09:05.994 "base_bdevs_list": [ 00:09:05.994 { 00:09:05.994 "name": null, 00:09:05.994 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:05.994 "is_configured": false, 00:09:05.994 "data_offset": 0, 00:09:05.994 "data_size": 65536 00:09:05.994 }, 00:09:05.994 { 00:09:05.994 "name": "BaseBdev2", 00:09:05.994 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:05.994 "is_configured": true, 00:09:05.994 "data_offset": 0, 00:09:05.994 "data_size": 65536 00:09:05.994 }, 00:09:05.994 { 00:09:05.994 "name": "BaseBdev3", 00:09:05.994 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:05.994 "is_configured": true, 00:09:05.994 "data_offset": 0, 00:09:05.994 "data_size": 65536 00:09:05.994 } 00:09:05.994 ] 00:09:05.994 }' 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.994 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.254 03:17:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.254 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc69768a-b4d7-4eb6-99db-697d72994d15 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.513 [2024-11-21 03:17:53.837940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.513 [2024-11-21 03:17:53.838033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:06.513 [2024-11-21 03:17:53.838045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:06.513 [2024-11-21 03:17:53.838412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:06.513 [2024-11-21 03:17:53.838585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.513 [2024-11-21 03:17:53.838603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:06.513 [2024-11-21 03:17:53.838842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.513 NewBaseBdev 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.513 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.513 [ 00:09:06.513 { 00:09:06.513 "name": "NewBaseBdev", 00:09:06.513 "aliases": [ 00:09:06.513 "bc69768a-b4d7-4eb6-99db-697d72994d15" 00:09:06.513 ], 00:09:06.513 "product_name": "Malloc disk", 00:09:06.513 
"block_size": 512, 00:09:06.513 "num_blocks": 65536, 00:09:06.513 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:06.513 "assigned_rate_limits": { 00:09:06.513 "rw_ios_per_sec": 0, 00:09:06.513 "rw_mbytes_per_sec": 0, 00:09:06.513 "r_mbytes_per_sec": 0, 00:09:06.513 "w_mbytes_per_sec": 0 00:09:06.513 }, 00:09:06.513 "claimed": true, 00:09:06.513 "claim_type": "exclusive_write", 00:09:06.513 "zoned": false, 00:09:06.513 "supported_io_types": { 00:09:06.513 "read": true, 00:09:06.513 "write": true, 00:09:06.513 "unmap": true, 00:09:06.513 "flush": true, 00:09:06.513 "reset": true, 00:09:06.513 "nvme_admin": false, 00:09:06.513 "nvme_io": false, 00:09:06.513 "nvme_io_md": false, 00:09:06.513 "write_zeroes": true, 00:09:06.513 "zcopy": true, 00:09:06.513 "get_zone_info": false, 00:09:06.513 "zone_management": false, 00:09:06.513 "zone_append": false, 00:09:06.513 "compare": false, 00:09:06.513 "compare_and_write": false, 00:09:06.513 "abort": true, 00:09:06.513 "seek_hole": false, 00:09:06.513 "seek_data": false, 00:09:06.513 "copy": true, 00:09:06.513 "nvme_iov_md": false 00:09:06.513 }, 00:09:06.513 "memory_domains": [ 00:09:06.513 { 00:09:06.514 "dma_device_id": "system", 00:09:06.514 "dma_device_type": 1 00:09:06.514 }, 00:09:06.514 { 00:09:06.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.514 "dma_device_type": 2 00:09:06.514 } 00:09:06.514 ], 00:09:06.514 "driver_specific": {} 00:09:06.514 } 00:09:06.514 ] 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.514 "name": "Existed_Raid", 00:09:06.514 "uuid": "1d29fc26-3654-4144-a5c0-51b63a6fbc83", 00:09:06.514 "strip_size_kb": 64, 00:09:06.514 "state": "online", 00:09:06.514 "raid_level": "raid0", 00:09:06.514 "superblock": false, 00:09:06.514 "num_base_bdevs": 3, 00:09:06.514 "num_base_bdevs_discovered": 3, 00:09:06.514 "num_base_bdevs_operational": 3, 00:09:06.514 "base_bdevs_list": [ 00:09:06.514 { 00:09:06.514 "name": "NewBaseBdev", 00:09:06.514 "uuid": 
"bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:06.514 "is_configured": true, 00:09:06.514 "data_offset": 0, 00:09:06.514 "data_size": 65536 00:09:06.514 }, 00:09:06.514 { 00:09:06.514 "name": "BaseBdev2", 00:09:06.514 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:06.514 "is_configured": true, 00:09:06.514 "data_offset": 0, 00:09:06.514 "data_size": 65536 00:09:06.514 }, 00:09:06.514 { 00:09:06.514 "name": "BaseBdev3", 00:09:06.514 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:06.514 "is_configured": true, 00:09:06.514 "data_offset": 0, 00:09:06.514 "data_size": 65536 00:09:06.514 } 00:09:06.514 ] 00:09:06.514 }' 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.514 03:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 [2024-11-21 03:17:54.294636] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.774 "name": "Existed_Raid", 00:09:06.774 "aliases": [ 00:09:06.774 "1d29fc26-3654-4144-a5c0-51b63a6fbc83" 00:09:06.774 ], 00:09:06.774 "product_name": "Raid Volume", 00:09:06.774 "block_size": 512, 00:09:06.774 "num_blocks": 196608, 00:09:06.774 "uuid": "1d29fc26-3654-4144-a5c0-51b63a6fbc83", 00:09:06.774 "assigned_rate_limits": { 00:09:06.774 "rw_ios_per_sec": 0, 00:09:06.774 "rw_mbytes_per_sec": 0, 00:09:06.774 "r_mbytes_per_sec": 0, 00:09:06.774 "w_mbytes_per_sec": 0 00:09:06.774 }, 00:09:06.774 "claimed": false, 00:09:06.774 "zoned": false, 00:09:06.774 "supported_io_types": { 00:09:06.774 "read": true, 00:09:06.774 "write": true, 00:09:06.774 "unmap": true, 00:09:06.774 "flush": true, 00:09:06.774 "reset": true, 00:09:06.774 "nvme_admin": false, 00:09:06.774 "nvme_io": false, 00:09:06.774 "nvme_io_md": false, 00:09:06.774 "write_zeroes": true, 00:09:06.774 "zcopy": false, 00:09:06.774 "get_zone_info": false, 00:09:06.774 "zone_management": false, 00:09:06.774 "zone_append": false, 00:09:06.774 "compare": false, 00:09:06.774 "compare_and_write": false, 00:09:06.774 "abort": false, 00:09:06.774 "seek_hole": false, 00:09:06.774 "seek_data": false, 00:09:06.774 "copy": false, 00:09:06.774 "nvme_iov_md": false 00:09:06.774 }, 00:09:06.774 "memory_domains": [ 00:09:06.774 { 00:09:06.774 "dma_device_id": "system", 00:09:06.774 "dma_device_type": 1 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.774 "dma_device_type": 2 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "dma_device_id": "system", 00:09:06.774 "dma_device_type": 1 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.774 
"dma_device_type": 2 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "dma_device_id": "system", 00:09:06.774 "dma_device_type": 1 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.774 "dma_device_type": 2 00:09:06.774 } 00:09:06.774 ], 00:09:06.774 "driver_specific": { 00:09:06.774 "raid": { 00:09:06.774 "uuid": "1d29fc26-3654-4144-a5c0-51b63a6fbc83", 00:09:06.774 "strip_size_kb": 64, 00:09:06.774 "state": "online", 00:09:06.774 "raid_level": "raid0", 00:09:06.774 "superblock": false, 00:09:06.774 "num_base_bdevs": 3, 00:09:06.774 "num_base_bdevs_discovered": 3, 00:09:06.774 "num_base_bdevs_operational": 3, 00:09:06.774 "base_bdevs_list": [ 00:09:06.774 { 00:09:06.774 "name": "NewBaseBdev", 00:09:06.774 "uuid": "bc69768a-b4d7-4eb6-99db-697d72994d15", 00:09:06.774 "is_configured": true, 00:09:06.774 "data_offset": 0, 00:09:06.774 "data_size": 65536 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "name": "BaseBdev2", 00:09:06.774 "uuid": "cd0345a8-7885-4b6a-95cc-d3984128259c", 00:09:06.774 "is_configured": true, 00:09:06.774 "data_offset": 0, 00:09:06.774 "data_size": 65536 00:09:06.774 }, 00:09:06.774 { 00:09:06.774 "name": "BaseBdev3", 00:09:06.774 "uuid": "3b663397-e762-488a-87d4-74cccde7eab3", 00:09:06.774 "is_configured": true, 00:09:06.774 "data_offset": 0, 00:09:06.774 "data_size": 65536 00:09:06.774 } 00:09:06.774 ] 00:09:06.774 } 00:09:06.774 } 00:09:06.774 }' 00:09:06.774 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.033 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:07.033 BaseBdev2 00:09:07.033 BaseBdev3' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.034 [2024-11-21 03:17:54.538307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.034 [2024-11-21 03:17:54.538364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.034 [2024-11-21 03:17:54.538472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.034 [2024-11-21 03:17:54.538543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.034 [2024-11-21 03:17:54.538554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77038 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 77038 ']' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 77038 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77038 00:09:07.034 killing process with pid 77038 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77038' 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 77038 00:09:07.034 [2024-11-21 03:17:54.573899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.034 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 77038 00:09:07.295 [2024-11-21 03:17:54.637319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.553 03:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.553 00:09:07.553 real 0m9.037s 00:09:07.553 user 0m15.166s 00:09:07.553 sys 0m1.830s 00:09:07.553 ************************************ 00:09:07.553 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.553 03:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.553 END TEST 
raid_state_function_test 00:09:07.553 ************************************ 00:09:07.553 03:17:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:07.553 03:17:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.553 03:17:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.553 03:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.553 ************************************ 00:09:07.553 START TEST raid_state_function_test_sb 00:09:07.553 ************************************ 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.553 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:07.554 Process raid pid: 77648 00:09:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77648 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77648' 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77648 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77648 ']' 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.554 03:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.812 [2024-11-21 03:17:55.131995] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:07.812 [2024-11-21 03:17:55.132179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.812 [2024-11-21 03:17:55.278337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:07.812 [2024-11-21 03:17:55.309467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.812 [2024-11-21 03:17:55.344248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.115 [2024-11-21 03:17:55.391169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.115 [2024-11-21 03:17:55.391217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.682 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.682 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:08.682 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.682 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.683 [2024-11-21 03:17:56.081183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.683 [2024-11-21 03:17:56.081244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.683 [2024-11-21 03:17:56.081258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.683 [2024-11-21 03:17:56.081267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.683 [2024-11-21 03:17:56.081282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.683 [2024-11-21 03:17:56.081290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.683 03:17:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.683 "name": "Existed_Raid", 00:09:08.683 "uuid": "ad19c032-e0bb-4e43-a771-875b9562a6c8", 00:09:08.683 "strip_size_kb": 64, 
00:09:08.683 "state": "configuring", 00:09:08.683 "raid_level": "raid0", 00:09:08.683 "superblock": true, 00:09:08.683 "num_base_bdevs": 3, 00:09:08.683 "num_base_bdevs_discovered": 0, 00:09:08.683 "num_base_bdevs_operational": 3, 00:09:08.683 "base_bdevs_list": [ 00:09:08.683 { 00:09:08.683 "name": "BaseBdev1", 00:09:08.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.683 "is_configured": false, 00:09:08.683 "data_offset": 0, 00:09:08.683 "data_size": 0 00:09:08.683 }, 00:09:08.683 { 00:09:08.683 "name": "BaseBdev2", 00:09:08.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.683 "is_configured": false, 00:09:08.683 "data_offset": 0, 00:09:08.683 "data_size": 0 00:09:08.683 }, 00:09:08.683 { 00:09:08.683 "name": "BaseBdev3", 00:09:08.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.683 "is_configured": false, 00:09:08.683 "data_offset": 0, 00:09:08.683 "data_size": 0 00:09:08.683 } 00:09:08.683 ] 00:09:08.683 }' 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.683 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.252 [2024-11-21 03:17:56.517280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.252 [2024-11-21 03:17:56.517406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.252 [2024-11-21 03:17:56.529349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.252 [2024-11-21 03:17:56.529461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.252 [2024-11-21 03:17:56.529496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.252 [2024-11-21 03:17:56.529521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.252 [2024-11-21 03:17:56.529545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.252 [2024-11-21 03:17:56.529567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.252 [2024-11-21 03:17:56.546756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.252 BaseBdev1 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.252 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.253 [ 00:09:09.253 { 00:09:09.253 "name": "BaseBdev1", 00:09:09.253 "aliases": [ 00:09:09.253 "681c36a7-dcd9-4f76-9e72-574860ea0a86" 00:09:09.253 ], 00:09:09.253 "product_name": "Malloc disk", 00:09:09.253 "block_size": 512, 00:09:09.253 "num_blocks": 65536, 00:09:09.253 "uuid": "681c36a7-dcd9-4f76-9e72-574860ea0a86", 00:09:09.253 "assigned_rate_limits": { 00:09:09.253 "rw_ios_per_sec": 0, 00:09:09.253 "rw_mbytes_per_sec": 0, 00:09:09.253 "r_mbytes_per_sec": 0, 00:09:09.253 "w_mbytes_per_sec": 0 00:09:09.253 }, 00:09:09.253 "claimed": true, 00:09:09.253 "claim_type": "exclusive_write", 00:09:09.253 "zoned": false, 00:09:09.253 "supported_io_types": { 
00:09:09.253 "read": true, 00:09:09.253 "write": true, 00:09:09.253 "unmap": true, 00:09:09.253 "flush": true, 00:09:09.253 "reset": true, 00:09:09.253 "nvme_admin": false, 00:09:09.253 "nvme_io": false, 00:09:09.253 "nvme_io_md": false, 00:09:09.253 "write_zeroes": true, 00:09:09.253 "zcopy": true, 00:09:09.253 "get_zone_info": false, 00:09:09.253 "zone_management": false, 00:09:09.253 "zone_append": false, 00:09:09.253 "compare": false, 00:09:09.253 "compare_and_write": false, 00:09:09.253 "abort": true, 00:09:09.253 "seek_hole": false, 00:09:09.253 "seek_data": false, 00:09:09.253 "copy": true, 00:09:09.253 "nvme_iov_md": false 00:09:09.253 }, 00:09:09.253 "memory_domains": [ 00:09:09.253 { 00:09:09.253 "dma_device_id": "system", 00:09:09.253 "dma_device_type": 1 00:09:09.253 }, 00:09:09.253 { 00:09:09.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.253 "dma_device_type": 2 00:09:09.253 } 00:09:09.253 ], 00:09:09.253 "driver_specific": {} 00:09:09.253 } 00:09:09.253 ] 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.253 03:17:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.253 "name": "Existed_Raid", 00:09:09.253 "uuid": "f4b2961d-c4f2-42a0-a5ec-f31b8b3fed1a", 00:09:09.253 "strip_size_kb": 64, 00:09:09.253 "state": "configuring", 00:09:09.253 "raid_level": "raid0", 00:09:09.253 "superblock": true, 00:09:09.253 "num_base_bdevs": 3, 00:09:09.253 "num_base_bdevs_discovered": 1, 00:09:09.253 "num_base_bdevs_operational": 3, 00:09:09.253 "base_bdevs_list": [ 00:09:09.253 { 00:09:09.253 "name": "BaseBdev1", 00:09:09.253 "uuid": "681c36a7-dcd9-4f76-9e72-574860ea0a86", 00:09:09.253 "is_configured": true, 00:09:09.253 "data_offset": 2048, 00:09:09.253 "data_size": 63488 00:09:09.253 }, 00:09:09.253 { 00:09:09.253 "name": "BaseBdev2", 00:09:09.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.253 "is_configured": false, 00:09:09.253 "data_offset": 0, 00:09:09.253 "data_size": 0 00:09:09.253 }, 00:09:09.253 { 00:09:09.253 "name": 
"BaseBdev3", 00:09:09.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.253 "is_configured": false, 00:09:09.253 "data_offset": 0, 00:09:09.253 "data_size": 0 00:09:09.253 } 00:09:09.253 ] 00:09:09.253 }' 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.253 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.512 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.512 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.512 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.512 [2024-11-21 03:17:56.983072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.512 [2024-11-21 03:17:56.983199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.513 [2024-11-21 03:17:56.991140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.513 [2024-11-21 03:17:56.993417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.513 [2024-11-21 03:17:56.993515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.513 [2024-11-21 03:17:56.993554] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.513 [2024-11-21 03:17:56.993581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.513 03:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.513 03:17:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.513 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.513 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.513 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.513 "name": "Existed_Raid", 00:09:09.513 "uuid": "f1597735-733e-4e1a-827f-985539630c19", 00:09:09.513 "strip_size_kb": 64, 00:09:09.513 "state": "configuring", 00:09:09.513 "raid_level": "raid0", 00:09:09.513 "superblock": true, 00:09:09.513 "num_base_bdevs": 3, 00:09:09.513 "num_base_bdevs_discovered": 1, 00:09:09.513 "num_base_bdevs_operational": 3, 00:09:09.513 "base_bdevs_list": [ 00:09:09.513 { 00:09:09.513 "name": "BaseBdev1", 00:09:09.513 "uuid": "681c36a7-dcd9-4f76-9e72-574860ea0a86", 00:09:09.513 "is_configured": true, 00:09:09.513 "data_offset": 2048, 00:09:09.513 "data_size": 63488 00:09:09.513 }, 00:09:09.513 { 00:09:09.513 "name": "BaseBdev2", 00:09:09.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.513 "is_configured": false, 00:09:09.513 "data_offset": 0, 00:09:09.513 "data_size": 0 00:09:09.513 }, 00:09:09.513 { 00:09:09.513 "name": "BaseBdev3", 00:09:09.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.513 "is_configured": false, 00:09:09.513 "data_offset": 0, 00:09:09.513 "data_size": 0 00:09:09.513 } 00:09:09.513 ] 00:09:09.513 }' 00:09:09.513 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.513 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.080 [2024-11-21 03:17:57.466702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.080 BaseBdev2 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.080 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.081 [ 00:09:10.081 { 00:09:10.081 "name": "BaseBdev2", 00:09:10.081 "aliases": [ 00:09:10.081 
"5cc4b0ce-081c-4824-87de-3d6f1693ec9c" 00:09:10.081 ], 00:09:10.081 "product_name": "Malloc disk", 00:09:10.081 "block_size": 512, 00:09:10.081 "num_blocks": 65536, 00:09:10.081 "uuid": "5cc4b0ce-081c-4824-87de-3d6f1693ec9c", 00:09:10.081 "assigned_rate_limits": { 00:09:10.081 "rw_ios_per_sec": 0, 00:09:10.081 "rw_mbytes_per_sec": 0, 00:09:10.081 "r_mbytes_per_sec": 0, 00:09:10.081 "w_mbytes_per_sec": 0 00:09:10.081 }, 00:09:10.081 "claimed": true, 00:09:10.081 "claim_type": "exclusive_write", 00:09:10.081 "zoned": false, 00:09:10.081 "supported_io_types": { 00:09:10.081 "read": true, 00:09:10.081 "write": true, 00:09:10.081 "unmap": true, 00:09:10.081 "flush": true, 00:09:10.081 "reset": true, 00:09:10.081 "nvme_admin": false, 00:09:10.081 "nvme_io": false, 00:09:10.081 "nvme_io_md": false, 00:09:10.081 "write_zeroes": true, 00:09:10.081 "zcopy": true, 00:09:10.081 "get_zone_info": false, 00:09:10.081 "zone_management": false, 00:09:10.081 "zone_append": false, 00:09:10.081 "compare": false, 00:09:10.081 "compare_and_write": false, 00:09:10.081 "abort": true, 00:09:10.081 "seek_hole": false, 00:09:10.081 "seek_data": false, 00:09:10.081 "copy": true, 00:09:10.081 "nvme_iov_md": false 00:09:10.081 }, 00:09:10.081 "memory_domains": [ 00:09:10.081 { 00:09:10.081 "dma_device_id": "system", 00:09:10.081 "dma_device_type": 1 00:09:10.081 }, 00:09:10.081 { 00:09:10.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.081 "dma_device_type": 2 00:09:10.081 } 00:09:10.081 ], 00:09:10.081 "driver_specific": {} 00:09:10.081 } 00:09:10.081 ] 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.081 "name": "Existed_Raid", 00:09:10.081 "uuid": "f1597735-733e-4e1a-827f-985539630c19", 00:09:10.081 
"strip_size_kb": 64, 00:09:10.081 "state": "configuring", 00:09:10.081 "raid_level": "raid0", 00:09:10.081 "superblock": true, 00:09:10.081 "num_base_bdevs": 3, 00:09:10.081 "num_base_bdevs_discovered": 2, 00:09:10.081 "num_base_bdevs_operational": 3, 00:09:10.081 "base_bdevs_list": [ 00:09:10.081 { 00:09:10.081 "name": "BaseBdev1", 00:09:10.081 "uuid": "681c36a7-dcd9-4f76-9e72-574860ea0a86", 00:09:10.081 "is_configured": true, 00:09:10.081 "data_offset": 2048, 00:09:10.081 "data_size": 63488 00:09:10.081 }, 00:09:10.081 { 00:09:10.081 "name": "BaseBdev2", 00:09:10.081 "uuid": "5cc4b0ce-081c-4824-87de-3d6f1693ec9c", 00:09:10.081 "is_configured": true, 00:09:10.081 "data_offset": 2048, 00:09:10.081 "data_size": 63488 00:09:10.081 }, 00:09:10.081 { 00:09:10.081 "name": "BaseBdev3", 00:09:10.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.081 "is_configured": false, 00:09:10.081 "data_offset": 0, 00:09:10.081 "data_size": 0 00:09:10.081 } 00:09:10.081 ] 00:09:10.081 }' 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.081 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.648 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.648 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.648 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.648 BaseBdev3 00:09:10.648 [2024-11-21 03:17:57.945521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.648 [2024-11-21 03:17:57.945739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:10.648 [2024-11-21 03:17:57.945754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.648 [2024-11-21 03:17:57.946109] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:10.648 [2024-11-21 03:17:57.946257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:10.648 [2024-11-21 03:17:57.946272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:10.648 [2024-11-21 03:17:57.946396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.649 [ 00:09:10.649 { 00:09:10.649 "name": "BaseBdev3", 00:09:10.649 "aliases": [ 00:09:10.649 "4977e520-a397-4b81-8131-bb59320ee201" 00:09:10.649 ], 00:09:10.649 "product_name": "Malloc disk", 00:09:10.649 "block_size": 512, 00:09:10.649 "num_blocks": 65536, 00:09:10.649 "uuid": "4977e520-a397-4b81-8131-bb59320ee201", 00:09:10.649 "assigned_rate_limits": { 00:09:10.649 "rw_ios_per_sec": 0, 00:09:10.649 "rw_mbytes_per_sec": 0, 00:09:10.649 "r_mbytes_per_sec": 0, 00:09:10.649 "w_mbytes_per_sec": 0 00:09:10.649 }, 00:09:10.649 "claimed": true, 00:09:10.649 "claim_type": "exclusive_write", 00:09:10.649 "zoned": false, 00:09:10.649 "supported_io_types": { 00:09:10.649 "read": true, 00:09:10.649 "write": true, 00:09:10.649 "unmap": true, 00:09:10.649 "flush": true, 00:09:10.649 "reset": true, 00:09:10.649 "nvme_admin": false, 00:09:10.649 "nvme_io": false, 00:09:10.649 "nvme_io_md": false, 00:09:10.649 "write_zeroes": true, 00:09:10.649 "zcopy": true, 00:09:10.649 "get_zone_info": false, 00:09:10.649 "zone_management": false, 00:09:10.649 "zone_append": false, 00:09:10.649 "compare": false, 00:09:10.649 "compare_and_write": false, 00:09:10.649 "abort": true, 00:09:10.649 "seek_hole": false, 00:09:10.649 "seek_data": false, 00:09:10.649 "copy": true, 00:09:10.649 "nvme_iov_md": false 00:09:10.649 }, 00:09:10.649 "memory_domains": [ 00:09:10.649 { 00:09:10.649 "dma_device_id": "system", 00:09:10.649 "dma_device_type": 1 00:09:10.649 }, 00:09:10.649 { 00:09:10.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.649 "dma_device_type": 2 00:09:10.649 } 00:09:10.649 ], 00:09:10.649 "driver_specific": {} 00:09:10.649 } 00:09:10.649 ] 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.649 
03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.649 03:17:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.649 03:17:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.649 "name": "Existed_Raid", 00:09:10.649 "uuid": "f1597735-733e-4e1a-827f-985539630c19", 00:09:10.649 "strip_size_kb": 64, 00:09:10.649 "state": "online", 00:09:10.649 "raid_level": "raid0", 00:09:10.649 "superblock": true, 00:09:10.649 "num_base_bdevs": 3, 00:09:10.649 "num_base_bdevs_discovered": 3, 00:09:10.649 "num_base_bdevs_operational": 3, 00:09:10.649 "base_bdevs_list": [ 00:09:10.649 { 00:09:10.649 "name": "BaseBdev1", 00:09:10.649 "uuid": "681c36a7-dcd9-4f76-9e72-574860ea0a86", 00:09:10.649 "is_configured": true, 00:09:10.649 "data_offset": 2048, 00:09:10.649 "data_size": 63488 00:09:10.649 }, 00:09:10.649 { 00:09:10.649 "name": "BaseBdev2", 00:09:10.649 "uuid": "5cc4b0ce-081c-4824-87de-3d6f1693ec9c", 00:09:10.649 "is_configured": true, 00:09:10.649 "data_offset": 2048, 00:09:10.649 "data_size": 63488 00:09:10.649 }, 00:09:10.649 { 00:09:10.649 "name": "BaseBdev3", 00:09:10.649 "uuid": "4977e520-a397-4b81-8131-bb59320ee201", 00:09:10.649 "is_configured": true, 00:09:10.649 "data_offset": 2048, 00:09:10.649 "data_size": 63488 00:09:10.649 } 00:09:10.649 ] 00:09:10.649 }' 00:09:10.649 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.649 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.907 
03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.907 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.907 [2024-11-21 03:17:58.458246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.165 "name": "Existed_Raid", 00:09:11.165 "aliases": [ 00:09:11.165 "f1597735-733e-4e1a-827f-985539630c19" 00:09:11.165 ], 00:09:11.165 "product_name": "Raid Volume", 00:09:11.165 "block_size": 512, 00:09:11.165 "num_blocks": 190464, 00:09:11.165 "uuid": "f1597735-733e-4e1a-827f-985539630c19", 00:09:11.165 "assigned_rate_limits": { 00:09:11.165 "rw_ios_per_sec": 0, 00:09:11.165 "rw_mbytes_per_sec": 0, 00:09:11.165 "r_mbytes_per_sec": 0, 00:09:11.165 "w_mbytes_per_sec": 0 00:09:11.165 }, 00:09:11.165 "claimed": false, 00:09:11.165 "zoned": false, 00:09:11.165 "supported_io_types": { 00:09:11.165 "read": true, 00:09:11.165 "write": true, 00:09:11.165 "unmap": true, 00:09:11.165 "flush": true, 00:09:11.165 "reset": true, 00:09:11.165 "nvme_admin": false, 00:09:11.165 "nvme_io": false, 00:09:11.165 "nvme_io_md": false, 00:09:11.165 "write_zeroes": true, 00:09:11.165 "zcopy": false, 00:09:11.165 "get_zone_info": false, 00:09:11.165 "zone_management": false, 00:09:11.165 "zone_append": false, 00:09:11.165 "compare": false, 00:09:11.165 "compare_and_write": false, 00:09:11.165 "abort": 
false, 00:09:11.165 "seek_hole": false, 00:09:11.165 "seek_data": false, 00:09:11.165 "copy": false, 00:09:11.165 "nvme_iov_md": false 00:09:11.165 }, 00:09:11.165 "memory_domains": [ 00:09:11.165 { 00:09:11.165 "dma_device_id": "system", 00:09:11.165 "dma_device_type": 1 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.165 "dma_device_type": 2 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "dma_device_id": "system", 00:09:11.165 "dma_device_type": 1 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.165 "dma_device_type": 2 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "dma_device_id": "system", 00:09:11.165 "dma_device_type": 1 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.165 "dma_device_type": 2 00:09:11.165 } 00:09:11.165 ], 00:09:11.165 "driver_specific": { 00:09:11.165 "raid": { 00:09:11.165 "uuid": "f1597735-733e-4e1a-827f-985539630c19", 00:09:11.165 "strip_size_kb": 64, 00:09:11.165 "state": "online", 00:09:11.165 "raid_level": "raid0", 00:09:11.165 "superblock": true, 00:09:11.165 "num_base_bdevs": 3, 00:09:11.165 "num_base_bdevs_discovered": 3, 00:09:11.165 "num_base_bdevs_operational": 3, 00:09:11.165 "base_bdevs_list": [ 00:09:11.165 { 00:09:11.165 "name": "BaseBdev1", 00:09:11.165 "uuid": "681c36a7-dcd9-4f76-9e72-574860ea0a86", 00:09:11.165 "is_configured": true, 00:09:11.165 "data_offset": 2048, 00:09:11.165 "data_size": 63488 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "name": "BaseBdev2", 00:09:11.165 "uuid": "5cc4b0ce-081c-4824-87de-3d6f1693ec9c", 00:09:11.165 "is_configured": true, 00:09:11.165 "data_offset": 2048, 00:09:11.165 "data_size": 63488 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "name": "BaseBdev3", 00:09:11.165 "uuid": "4977e520-a397-4b81-8131-bb59320ee201", 00:09:11.165 "is_configured": true, 00:09:11.165 "data_offset": 2048, 00:09:11.165 "data_size": 63488 00:09:11.165 } 00:09:11.165 ] 00:09:11.165 } 
00:09:11.165 } 00:09:11.165 }' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.165 BaseBdev2 00:09:11.165 BaseBdev3' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.165 [2024-11-21 03:17:58.682101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:09:11.165 [2024-11-21 03:17:58.682151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.165 [2024-11-21 03:17:58.682229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.165 03:17:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.165 03:17:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.423 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.423 "name": "Existed_Raid", 00:09:11.423 "uuid": "f1597735-733e-4e1a-827f-985539630c19", 00:09:11.423 "strip_size_kb": 64, 00:09:11.423 "state": "offline", 00:09:11.423 "raid_level": "raid0", 00:09:11.423 "superblock": true, 00:09:11.423 "num_base_bdevs": 3, 00:09:11.423 "num_base_bdevs_discovered": 2, 00:09:11.423 "num_base_bdevs_operational": 2, 00:09:11.423 "base_bdevs_list": [ 00:09:11.423 { 00:09:11.423 "name": null, 00:09:11.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.423 "is_configured": false, 00:09:11.424 "data_offset": 0, 00:09:11.424 "data_size": 63488 00:09:11.424 }, 00:09:11.424 { 00:09:11.424 "name": "BaseBdev2", 00:09:11.424 "uuid": "5cc4b0ce-081c-4824-87de-3d6f1693ec9c", 00:09:11.424 "is_configured": true, 00:09:11.424 "data_offset": 2048, 00:09:11.424 "data_size": 63488 00:09:11.424 }, 00:09:11.424 { 00:09:11.424 "name": "BaseBdev3", 00:09:11.424 "uuid": "4977e520-a397-4b81-8131-bb59320ee201", 00:09:11.424 "is_configured": true, 00:09:11.424 "data_offset": 2048, 00:09:11.424 "data_size": 63488 00:09:11.424 } 00:09:11.424 ] 00:09:11.424 }' 00:09:11.424 03:17:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.424 03:17:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 [2024-11-21 03:17:59.218343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.683 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.942 [2024-11-21 03:17:59.286230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.942 [2024-11-21 03:17:59.286306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.942 03:17:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.942 BaseBdev2 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.942 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.943 [ 00:09:11.943 { 00:09:11.943 "name": "BaseBdev2", 00:09:11.943 "aliases": [ 00:09:11.943 "ddd70436-cbc0-4e80-9531-7c2151c031ab" 00:09:11.943 ], 00:09:11.943 "product_name": "Malloc disk", 00:09:11.943 "block_size": 512, 00:09:11.943 "num_blocks": 65536, 00:09:11.943 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:11.943 "assigned_rate_limits": { 00:09:11.943 "rw_ios_per_sec": 0, 00:09:11.943 "rw_mbytes_per_sec": 0, 00:09:11.943 "r_mbytes_per_sec": 0, 00:09:11.943 "w_mbytes_per_sec": 0 00:09:11.943 }, 00:09:11.943 "claimed": false, 00:09:11.943 "zoned": false, 00:09:11.943 "supported_io_types": { 00:09:11.943 "read": true, 00:09:11.943 "write": true, 00:09:11.943 "unmap": true, 00:09:11.943 "flush": true, 00:09:11.943 "reset": true, 00:09:11.943 "nvme_admin": false, 00:09:11.943 "nvme_io": false, 00:09:11.943 "nvme_io_md": false, 00:09:11.943 "write_zeroes": true, 00:09:11.943 "zcopy": true, 00:09:11.943 "get_zone_info": false, 00:09:11.943 "zone_management": false, 00:09:11.943 "zone_append": false, 00:09:11.943 "compare": false, 00:09:11.943 "compare_and_write": false, 00:09:11.943 "abort": true, 00:09:11.943 "seek_hole": false, 00:09:11.943 "seek_data": false, 00:09:11.943 "copy": true, 00:09:11.943 "nvme_iov_md": false 00:09:11.943 }, 
00:09:11.943 "memory_domains": [ 00:09:11.943 { 00:09:11.943 "dma_device_id": "system", 00:09:11.943 "dma_device_type": 1 00:09:11.943 }, 00:09:11.943 { 00:09:11.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.943 "dma_device_type": 2 00:09:11.943 } 00:09:11.943 ], 00:09:11.943 "driver_specific": {} 00:09:11.943 } 00:09:11.943 ] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.943 BaseBdev3 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.943 03:17:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.943 [ 00:09:11.943 { 00:09:11.943 "name": "BaseBdev3", 00:09:11.943 "aliases": [ 00:09:11.943 "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82" 00:09:11.943 ], 00:09:11.943 "product_name": "Malloc disk", 00:09:11.943 "block_size": 512, 00:09:11.943 "num_blocks": 65536, 00:09:11.943 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:11.943 "assigned_rate_limits": { 00:09:11.943 "rw_ios_per_sec": 0, 00:09:11.943 "rw_mbytes_per_sec": 0, 00:09:11.943 "r_mbytes_per_sec": 0, 00:09:11.943 "w_mbytes_per_sec": 0 00:09:11.943 }, 00:09:11.943 "claimed": false, 00:09:11.943 "zoned": false, 00:09:11.943 "supported_io_types": { 00:09:11.943 "read": true, 00:09:11.943 "write": true, 00:09:11.943 "unmap": true, 00:09:11.943 "flush": true, 00:09:11.943 "reset": true, 00:09:11.943 "nvme_admin": false, 00:09:11.943 "nvme_io": false, 00:09:11.943 "nvme_io_md": false, 00:09:11.943 "write_zeroes": true, 00:09:11.943 "zcopy": true, 00:09:11.943 "get_zone_info": false, 00:09:11.943 "zone_management": false, 00:09:11.943 "zone_append": false, 00:09:11.943 "compare": false, 00:09:11.943 "compare_and_write": false, 00:09:11.943 "abort": true, 00:09:11.943 "seek_hole": false, 00:09:11.943 "seek_data": false, 
00:09:11.943 "copy": true, 00:09:11.943 "nvme_iov_md": false 00:09:11.943 }, 00:09:11.943 "memory_domains": [ 00:09:11.943 { 00:09:11.943 "dma_device_id": "system", 00:09:11.943 "dma_device_type": 1 00:09:11.943 }, 00:09:11.943 { 00:09:11.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.943 "dma_device_type": 2 00:09:11.943 } 00:09:11.943 ], 00:09:11.943 "driver_specific": {} 00:09:11.943 } 00:09:11.943 ] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.943 [2024-11-21 03:17:59.457668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.943 [2024-11-21 03:17:59.457828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.943 [2024-11-21 03:17:59.457884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.943 [2024-11-21 03:17:59.460233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.943 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.203 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.203 "name": "Existed_Raid", 00:09:12.203 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:12.203 "strip_size_kb": 64, 00:09:12.203 "state": "configuring", 00:09:12.203 "raid_level": "raid0", 00:09:12.203 
"superblock": true, 00:09:12.203 "num_base_bdevs": 3, 00:09:12.203 "num_base_bdevs_discovered": 2, 00:09:12.203 "num_base_bdevs_operational": 3, 00:09:12.203 "base_bdevs_list": [ 00:09:12.203 { 00:09:12.203 "name": "BaseBdev1", 00:09:12.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.203 "is_configured": false, 00:09:12.203 "data_offset": 0, 00:09:12.203 "data_size": 0 00:09:12.203 }, 00:09:12.203 { 00:09:12.203 "name": "BaseBdev2", 00:09:12.203 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:12.203 "is_configured": true, 00:09:12.203 "data_offset": 2048, 00:09:12.203 "data_size": 63488 00:09:12.203 }, 00:09:12.203 { 00:09:12.203 "name": "BaseBdev3", 00:09:12.203 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:12.203 "is_configured": true, 00:09:12.203 "data_offset": 2048, 00:09:12.203 "data_size": 63488 00:09:12.203 } 00:09:12.203 ] 00:09:12.203 }' 00:09:12.203 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.203 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.463 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.463 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.463 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.463 [2024-11-21 03:17:59.913830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.463 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.463 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.463 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb 
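The `verify_raid_bdev_state` helper above pipes `bdev_raid_get_bdevs` through `jq` and compares the reported state and counters with the expected values. The checks it performs can be sketched in Python as a toy model (this is my own re-statement of the shell logic, not SPDK code; the field names come from the JSON dumps in this log):

```python
import json

def verify_raid_bdev_state(raid_bdev_info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Re-check, in Python, what the shell helper asserts on the
    bdev_raid_get_bdevs output (toy model of the test logic)."""
    info = json.loads(raid_bdev_info)
    # Count slots that report is_configured, as the discovered bdevs.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational
            and configured == info["num_base_bdevs_discovered"])

# Snapshot mirroring the log: BaseBdev1 not created yet, 2 of 3 discovered.
snapshot = json.dumps({
    "name": "Existed_Raid", "state": "configuring", "raid_level": "raid0",
    "strip_size_kb": 64, "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ],
})
print(verify_raid_bdev_state(snapshot, "configuring", "raid0", 64, 3))  # True
```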
-- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.464 "name": "Existed_Raid", 00:09:12.464 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:12.464 "strip_size_kb": 64, 00:09:12.464 "state": "configuring", 00:09:12.464 "raid_level": "raid0", 00:09:12.464 "superblock": true, 00:09:12.464 "num_base_bdevs": 3, 00:09:12.464 "num_base_bdevs_discovered": 1, 00:09:12.464 "num_base_bdevs_operational": 3, 00:09:12.464 "base_bdevs_list": [ 00:09:12.464 { 00:09:12.464 "name": "BaseBdev1", 
00:09:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.464 "is_configured": false, 00:09:12.464 "data_offset": 0, 00:09:12.464 "data_size": 0 00:09:12.464 }, 00:09:12.464 { 00:09:12.464 "name": null, 00:09:12.464 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:12.464 "is_configured": false, 00:09:12.464 "data_offset": 0, 00:09:12.464 "data_size": 63488 00:09:12.464 }, 00:09:12.464 { 00:09:12.464 "name": "BaseBdev3", 00:09:12.464 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:12.464 "is_configured": true, 00:09:12.464 "data_offset": 2048, 00:09:12.464 "data_size": 63488 00:09:12.464 } 00:09:12.464 ] 00:09:12.464 }' 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.464 03:17:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 BaseBdev1 00:09:13.030 [2024-11-21 03:18:00.377665] 
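The `bdev_raid_remove_base_bdev BaseBdev2` step above leaves the array in `"configuring"` with the removed slot's `name` set to null and `is_configured` false, while the slot's uuid is retained so it can later be re-populated. A toy model of that bookkeeping, inferred only from the JSON before/after dumps in this log (not SPDK's actual internals):

```python
def remove_base_bdev(raid, name):
    """Mark the named slot unconfigured, mirroring what the log shows
    after bdev_raid_remove_base_bdev: name goes null, is_configured
    goes false, the uuid is kept. (Toy bookkeeping, not SPDK code.)"""
    for slot in raid["base_bdevs_list"]:
        if slot["name"] == name and slot["is_configured"]:
            slot["name"] = None
            slot["is_configured"] = False
            raid["num_base_bdevs_discovered"] -= 1
            return True
    return False

# State as dumped in the log before the removal of BaseBdev2.
raid = {
    "name": "Existed_Raid", "state": "configuring",
    "num_base_bdevs_discovered": 2,
    "base_bdevs_list": [
        {"name": "BaseBdev1",
         "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": False},
        {"name": "BaseBdev2",
         "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", "is_configured": True},
        {"name": "BaseBdev3",
         "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", "is_configured": True},
    ],
}
remove_base_bdev(raid, "BaseBdev2")
print(raid["num_base_bdevs_discovered"])  # 1
```

This matches the subsequent `jq '.[0].base_bdevs_list[1].is_configured'` check in the log, which expects `false` for the removed slot.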
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.030 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.031 [ 00:09:13.031 { 00:09:13.031 "name": "BaseBdev1", 00:09:13.031 "aliases": [ 00:09:13.031 "e5131a16-3265-4afa-b57d-84c9535b40da" 00:09:13.031 ], 00:09:13.031 "product_name": "Malloc disk", 00:09:13.031 "block_size": 512, 00:09:13.031 "num_blocks": 65536, 00:09:13.031 "uuid": 
"e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:13.031 "assigned_rate_limits": { 00:09:13.031 "rw_ios_per_sec": 0, 00:09:13.031 "rw_mbytes_per_sec": 0, 00:09:13.031 "r_mbytes_per_sec": 0, 00:09:13.031 "w_mbytes_per_sec": 0 00:09:13.031 }, 00:09:13.031 "claimed": true, 00:09:13.031 "claim_type": "exclusive_write", 00:09:13.031 "zoned": false, 00:09:13.031 "supported_io_types": { 00:09:13.031 "read": true, 00:09:13.031 "write": true, 00:09:13.031 "unmap": true, 00:09:13.031 "flush": true, 00:09:13.031 "reset": true, 00:09:13.031 "nvme_admin": false, 00:09:13.031 "nvme_io": false, 00:09:13.031 "nvme_io_md": false, 00:09:13.031 "write_zeroes": true, 00:09:13.031 "zcopy": true, 00:09:13.031 "get_zone_info": false, 00:09:13.031 "zone_management": false, 00:09:13.031 "zone_append": false, 00:09:13.031 "compare": false, 00:09:13.031 "compare_and_write": false, 00:09:13.031 "abort": true, 00:09:13.031 "seek_hole": false, 00:09:13.031 "seek_data": false, 00:09:13.031 "copy": true, 00:09:13.031 "nvme_iov_md": false 00:09:13.031 }, 00:09:13.031 "memory_domains": [ 00:09:13.031 { 00:09:13.031 "dma_device_id": "system", 00:09:13.031 "dma_device_type": 1 00:09:13.031 }, 00:09:13.031 { 00:09:13.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.031 "dma_device_type": 2 00:09:13.031 } 00:09:13.031 ], 00:09:13.031 "driver_specific": {} 00:09:13.031 } 00:09:13.031 ] 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.031 
03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.031 "name": "Existed_Raid", 00:09:13.031 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:13.031 "strip_size_kb": 64, 00:09:13.031 "state": "configuring", 00:09:13.031 "raid_level": "raid0", 00:09:13.031 "superblock": true, 00:09:13.031 "num_base_bdevs": 3, 00:09:13.031 "num_base_bdevs_discovered": 2, 00:09:13.031 "num_base_bdevs_operational": 3, 00:09:13.031 "base_bdevs_list": [ 00:09:13.031 { 00:09:13.031 "name": "BaseBdev1", 00:09:13.031 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:13.031 
"is_configured": true, 00:09:13.031 "data_offset": 2048, 00:09:13.031 "data_size": 63488 00:09:13.031 }, 00:09:13.031 { 00:09:13.031 "name": null, 00:09:13.031 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:13.031 "is_configured": false, 00:09:13.031 "data_offset": 0, 00:09:13.031 "data_size": 63488 00:09:13.031 }, 00:09:13.031 { 00:09:13.031 "name": "BaseBdev3", 00:09:13.031 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:13.031 "is_configured": true, 00:09:13.031 "data_offset": 2048, 00:09:13.031 "data_size": 63488 00:09:13.031 } 00:09:13.031 ] 00:09:13.031 }' 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.031 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.291 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.291 [2024-11-21 03:18:00.854080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.551 03:18:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.551 "name": 
"Existed_Raid", 00:09:13.551 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:13.551 "strip_size_kb": 64, 00:09:13.551 "state": "configuring", 00:09:13.551 "raid_level": "raid0", 00:09:13.551 "superblock": true, 00:09:13.551 "num_base_bdevs": 3, 00:09:13.551 "num_base_bdevs_discovered": 1, 00:09:13.551 "num_base_bdevs_operational": 3, 00:09:13.551 "base_bdevs_list": [ 00:09:13.551 { 00:09:13.551 "name": "BaseBdev1", 00:09:13.551 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:13.551 "is_configured": true, 00:09:13.551 "data_offset": 2048, 00:09:13.551 "data_size": 63488 00:09:13.551 }, 00:09:13.551 { 00:09:13.551 "name": null, 00:09:13.551 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:13.551 "is_configured": false, 00:09:13.551 "data_offset": 0, 00:09:13.551 "data_size": 63488 00:09:13.551 }, 00:09:13.551 { 00:09:13.551 "name": null, 00:09:13.551 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:13.551 "is_configured": false, 00:09:13.551 "data_offset": 0, 00:09:13.551 "data_size": 63488 00:09:13.551 } 00:09:13.551 ] 00:09:13.551 }' 00:09:13.551 03:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.552 03:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:13.812 
03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.812 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.071 [2024-11-21 03:18:01.382423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.071 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.071 "name": "Existed_Raid", 00:09:14.071 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:14.071 "strip_size_kb": 64, 00:09:14.071 "state": "configuring", 00:09:14.071 "raid_level": "raid0", 00:09:14.071 "superblock": true, 00:09:14.071 "num_base_bdevs": 3, 00:09:14.071 "num_base_bdevs_discovered": 2, 00:09:14.071 "num_base_bdevs_operational": 3, 00:09:14.071 "base_bdevs_list": [ 00:09:14.071 { 00:09:14.071 "name": "BaseBdev1", 00:09:14.071 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:14.071 "is_configured": true, 00:09:14.071 "data_offset": 2048, 00:09:14.071 "data_size": 63488 00:09:14.071 }, 00:09:14.071 { 00:09:14.071 "name": null, 00:09:14.071 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:14.071 "is_configured": false, 00:09:14.071 "data_offset": 0, 00:09:14.071 "data_size": 63488 00:09:14.072 }, 00:09:14.072 { 00:09:14.072 "name": "BaseBdev3", 00:09:14.072 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:14.072 "is_configured": true, 00:09:14.072 "data_offset": 2048, 00:09:14.072 "data_size": 63488 00:09:14.072 } 00:09:14.072 ] 00:09:14.072 }' 00:09:14.072 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.072 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.331 03:18:01 bdev_raid.raid_state_function_test_sb -- 
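The `bdev_raid_add_base_bdev Existed_Raid BaseBdev3` step re-claims the previously unconfigured slot, and the counters move from 1 to 2 discovered while the array stays `"configuring"` until every slot is filled. Extending the toy model (the `"online"` state name for a fully configured array is my assumption; this log only ever shows `"configuring"`):

```python
def add_base_bdev(raid, name, uuid):
    """Re-claim an unconfigured slot by uuid, mirroring the log's
    bdev_raid_add_base_bdev step. Toy bookkeeping, not SPDK internals;
    the "online" transition is an assumption, not shown in this log."""
    for slot in raid["base_bdevs_list"]:
        if not slot["is_configured"] and slot["uuid"] == uuid:
            slot["name"] = name
            slot["is_configured"] = True
            raid["num_base_bdevs_discovered"] += 1
            if (raid["num_base_bdevs_discovered"]
                    == raid["num_base_bdevs_operational"]):
                raid["state"] = "online"
            return True
    return False

# State as dumped in the log just before BaseBdev3 is re-added.
raid = {
    "name": "Existed_Raid", "state": "configuring",
    "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1",
         "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", "is_configured": True},
        {"name": None,
         "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", "is_configured": False},
        {"name": None,
         "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", "is_configured": False},
    ],
}
add_base_bdev(raid, "BaseBdev3", "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82")
print(raid["num_base_bdevs_discovered"], raid["state"])  # 2 configuring
```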
bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.331 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.331 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.591 [2024-11-21 03:18:01.902695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.591 "name": "Existed_Raid", 00:09:14.591 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:14.591 "strip_size_kb": 64, 00:09:14.591 "state": "configuring", 00:09:14.591 "raid_level": "raid0", 00:09:14.591 "superblock": true, 00:09:14.591 "num_base_bdevs": 3, 00:09:14.591 "num_base_bdevs_discovered": 1, 00:09:14.591 "num_base_bdevs_operational": 3, 00:09:14.591 "base_bdevs_list": [ 00:09:14.591 { 00:09:14.591 "name": null, 00:09:14.591 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:14.591 "is_configured": false, 00:09:14.591 "data_offset": 0, 00:09:14.591 "data_size": 63488 00:09:14.591 }, 00:09:14.591 { 00:09:14.591 "name": null, 00:09:14.591 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:14.591 "is_configured": false, 00:09:14.591 "data_offset": 0, 00:09:14.591 "data_size": 63488 00:09:14.591 }, 00:09:14.591 { 00:09:14.591 "name": "BaseBdev3", 00:09:14.591 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:14.591 "is_configured": true, 00:09:14.591 "data_offset": 2048, 00:09:14.591 
"data_size": 63488 00:09:14.591 } 00:09:14.591 ] 00:09:14.591 }' 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.591 03:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.852 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.852 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.852 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.112 [2024-11-21 03:18:02.442133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.112 03:18:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.112 "name": "Existed_Raid", 00:09:15.112 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:15.112 "strip_size_kb": 64, 00:09:15.112 "state": "configuring", 00:09:15.112 "raid_level": "raid0", 00:09:15.112 "superblock": true, 00:09:15.112 "num_base_bdevs": 3, 00:09:15.112 "num_base_bdevs_discovered": 2, 00:09:15.112 "num_base_bdevs_operational": 3, 00:09:15.112 "base_bdevs_list": [ 00:09:15.112 { 00:09:15.112 "name": null, 00:09:15.112 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:15.112 "is_configured": 
false, 00:09:15.112 "data_offset": 0, 00:09:15.112 "data_size": 63488 00:09:15.112 }, 00:09:15.112 { 00:09:15.112 "name": "BaseBdev2", 00:09:15.112 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:15.112 "is_configured": true, 00:09:15.112 "data_offset": 2048, 00:09:15.112 "data_size": 63488 00:09:15.112 }, 00:09:15.112 { 00:09:15.112 "name": "BaseBdev3", 00:09:15.112 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:15.112 "is_configured": true, 00:09:15.112 "data_offset": 2048, 00:09:15.112 "data_size": 63488 00:09:15.112 } 00:09:15.112 ] 00:09:15.112 }' 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.112 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.372 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.372 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.372 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.372 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.372 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.631 03:18:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e5131a16-3265-4afa-b57d-84c9535b40da 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.631 03:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.631 [2024-11-21 03:18:03.005729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.631 [2024-11-21 03:18:03.006054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.631 [2024-11-21 03:18:03.006109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.631 [2024-11-21 03:18:03.006404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:15.631 NewBaseBdev 00:09:15.631 [2024-11-21 03:18:03.006579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.631 [2024-11-21 03:18:03.006638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.631 [2024-11-21 03:18:03.006804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 
00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.631 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.631 [ 00:09:15.631 { 00:09:15.631 "name": "NewBaseBdev", 00:09:15.631 "aliases": [ 00:09:15.631 "e5131a16-3265-4afa-b57d-84c9535b40da" 00:09:15.631 ], 00:09:15.631 "product_name": "Malloc disk", 00:09:15.631 "block_size": 512, 00:09:15.631 "num_blocks": 65536, 00:09:15.631 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:15.631 "assigned_rate_limits": { 00:09:15.631 "rw_ios_per_sec": 0, 00:09:15.631 "rw_mbytes_per_sec": 0, 00:09:15.631 "r_mbytes_per_sec": 0, 00:09:15.631 "w_mbytes_per_sec": 0 00:09:15.631 }, 00:09:15.631 "claimed": true, 00:09:15.631 "claim_type": "exclusive_write", 00:09:15.631 "zoned": false, 00:09:15.631 "supported_io_types": { 00:09:15.631 "read": true, 00:09:15.631 "write": true, 00:09:15.631 "unmap": true, 00:09:15.631 "flush": true, 00:09:15.631 "reset": true, 00:09:15.631 "nvme_admin": false, 00:09:15.631 "nvme_io": false, 00:09:15.631 "nvme_io_md": false, 00:09:15.631 "write_zeroes": true, 00:09:15.631 
"zcopy": true, 00:09:15.631 "get_zone_info": false, 00:09:15.631 "zone_management": false, 00:09:15.631 "zone_append": false, 00:09:15.631 "compare": false, 00:09:15.631 "compare_and_write": false, 00:09:15.631 "abort": true, 00:09:15.631 "seek_hole": false, 00:09:15.632 "seek_data": false, 00:09:15.632 "copy": true, 00:09:15.632 "nvme_iov_md": false 00:09:15.632 }, 00:09:15.632 "memory_domains": [ 00:09:15.632 { 00:09:15.632 "dma_device_id": "system", 00:09:15.632 "dma_device_type": 1 00:09:15.632 }, 00:09:15.632 { 00:09:15.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.632 "dma_device_type": 2 00:09:15.632 } 00:09:15.632 ], 00:09:15.632 "driver_specific": {} 00:09:15.632 } 00:09:15.632 ] 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.632 "name": "Existed_Raid", 00:09:15.632 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:15.632 "strip_size_kb": 64, 00:09:15.632 "state": "online", 00:09:15.632 "raid_level": "raid0", 00:09:15.632 "superblock": true, 00:09:15.632 "num_base_bdevs": 3, 00:09:15.632 "num_base_bdevs_discovered": 3, 00:09:15.632 "num_base_bdevs_operational": 3, 00:09:15.632 "base_bdevs_list": [ 00:09:15.632 { 00:09:15.632 "name": "NewBaseBdev", 00:09:15.632 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:15.632 "is_configured": true, 00:09:15.632 "data_offset": 2048, 00:09:15.632 "data_size": 63488 00:09:15.632 }, 00:09:15.632 { 00:09:15.632 "name": "BaseBdev2", 00:09:15.632 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:15.632 "is_configured": true, 00:09:15.632 "data_offset": 2048, 00:09:15.632 "data_size": 63488 00:09:15.632 }, 00:09:15.632 { 00:09:15.632 "name": "BaseBdev3", 00:09:15.632 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:15.632 "is_configured": true, 00:09:15.632 "data_offset": 2048, 00:09:15.632 "data_size": 63488 00:09:15.632 } 00:09:15.632 ] 00:09:15.632 }' 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.632 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.198 [2024-11-21 03:18:03.542466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.198 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.198 "name": "Existed_Raid", 00:09:16.198 "aliases": [ 00:09:16.198 "a1a88061-72fc-40cb-a53c-cb7fb882a63e" 00:09:16.198 ], 00:09:16.198 "product_name": "Raid Volume", 00:09:16.198 "block_size": 512, 00:09:16.198 "num_blocks": 190464, 00:09:16.198 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:16.198 "assigned_rate_limits": { 00:09:16.198 
"rw_ios_per_sec": 0, 00:09:16.198 "rw_mbytes_per_sec": 0, 00:09:16.198 "r_mbytes_per_sec": 0, 00:09:16.198 "w_mbytes_per_sec": 0 00:09:16.198 }, 00:09:16.198 "claimed": false, 00:09:16.198 "zoned": false, 00:09:16.198 "supported_io_types": { 00:09:16.198 "read": true, 00:09:16.198 "write": true, 00:09:16.198 "unmap": true, 00:09:16.198 "flush": true, 00:09:16.198 "reset": true, 00:09:16.198 "nvme_admin": false, 00:09:16.198 "nvme_io": false, 00:09:16.198 "nvme_io_md": false, 00:09:16.198 "write_zeroes": true, 00:09:16.198 "zcopy": false, 00:09:16.198 "get_zone_info": false, 00:09:16.198 "zone_management": false, 00:09:16.198 "zone_append": false, 00:09:16.198 "compare": false, 00:09:16.198 "compare_and_write": false, 00:09:16.198 "abort": false, 00:09:16.198 "seek_hole": false, 00:09:16.198 "seek_data": false, 00:09:16.198 "copy": false, 00:09:16.198 "nvme_iov_md": false 00:09:16.198 }, 00:09:16.198 "memory_domains": [ 00:09:16.198 { 00:09:16.198 "dma_device_id": "system", 00:09:16.198 "dma_device_type": 1 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.198 "dma_device_type": 2 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "dma_device_id": "system", 00:09:16.198 "dma_device_type": 1 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.198 "dma_device_type": 2 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "dma_device_id": "system", 00:09:16.198 "dma_device_type": 1 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.198 "dma_device_type": 2 00:09:16.198 } 00:09:16.198 ], 00:09:16.198 "driver_specific": { 00:09:16.198 "raid": { 00:09:16.198 "uuid": "a1a88061-72fc-40cb-a53c-cb7fb882a63e", 00:09:16.198 "strip_size_kb": 64, 00:09:16.198 "state": "online", 00:09:16.198 "raid_level": "raid0", 00:09:16.198 "superblock": true, 00:09:16.198 "num_base_bdevs": 3, 00:09:16.198 "num_base_bdevs_discovered": 3, 00:09:16.198 "num_base_bdevs_operational": 
3, 00:09:16.198 "base_bdevs_list": [ 00:09:16.198 { 00:09:16.198 "name": "NewBaseBdev", 00:09:16.198 "uuid": "e5131a16-3265-4afa-b57d-84c9535b40da", 00:09:16.198 "is_configured": true, 00:09:16.198 "data_offset": 2048, 00:09:16.198 "data_size": 63488 00:09:16.198 }, 00:09:16.198 { 00:09:16.198 "name": "BaseBdev2", 00:09:16.198 "uuid": "ddd70436-cbc0-4e80-9531-7c2151c031ab", 00:09:16.198 "is_configured": true, 00:09:16.198 "data_offset": 2048, 00:09:16.198 "data_size": 63488 00:09:16.198 }, 00:09:16.198 { 00:09:16.199 "name": "BaseBdev3", 00:09:16.199 "uuid": "b26542bc-7370-4cb3-8b0d-bb3b88b0ca82", 00:09:16.199 "is_configured": true, 00:09:16.199 "data_offset": 2048, 00:09:16.199 "data_size": 63488 00:09:16.199 } 00:09:16.199 ] 00:09:16.199 } 00:09:16.199 } 00:09:16.199 }' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.199 BaseBdev2 00:09:16.199 BaseBdev3' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.199 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.457 03:18:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.457 [2024-11-21 03:18:03.846287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.457 [2024-11-21 03:18:03.846402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.457 [2024-11-21 03:18:03.846523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.457 [2024-11-21 03:18:03.846621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.457 [2024-11-21 03:18:03.846675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77648 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77648 ']' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77648 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77648 00:09:16.457 killing process with pid 77648 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77648' 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77648 00:09:16.457 [2024-11-21 03:18:03.885768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.457 03:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77648 00:09:16.457 [2024-11-21 03:18:03.918942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.715 03:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.715 00:09:16.715 real 0m9.107s 00:09:16.715 user 0m15.662s 00:09:16.715 sys 0m1.691s 00:09:16.715 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.715 03:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.715 ************************************ 00:09:16.715 END TEST raid_state_function_test_sb 00:09:16.715 ************************************ 00:09:16.715 03:18:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:16.715 03:18:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:16.715 03:18:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.715 03:18:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.715 ************************************ 00:09:16.715 START TEST 
raid_superblock_test 00:09:16.715 ************************************ 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:16.715 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78257 00:09:16.716 03:18:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78257 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78257 ']' 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.716 03:18:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.974 [2024-11-21 03:18:04.300616] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:16.974 [2024-11-21 03:18:04.300914] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78257 ] 00:09:16.974 [2024-11-21 03:18:04.441754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:16.974 [2024-11-21 03:18:04.476551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.974 [2024-11-21 03:18:04.509526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.233 [2024-11-21 03:18:04.555648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.233 [2024-11-21 03:18:04.555692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.801 malloc1 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.801 [2024-11-21 03:18:05.276644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.801 [2024-11-21 03:18:05.276739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.801 [2024-11-21 03:18:05.276775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.801 [2024-11-21 03:18:05.276791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.801 [2024-11-21 03:18:05.279807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.801 [2024-11-21 03:18:05.279855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.801 pt1 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.801 malloc2 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.801 [2024-11-21 03:18:05.313059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.801 [2024-11-21 03:18:05.313149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.801 [2024-11-21 03:18:05.313175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:17.801 [2024-11-21 03:18:05.313187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.801 [2024-11-21 03:18:05.316158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.801 [2024-11-21 03:18:05.316207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.801 pt2 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.801 malloc3 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.801 [2024-11-21 03:18:05.353589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.801 [2024-11-21 03:18:05.353771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.801 [2024-11-21 03:18:05.353823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:17.801 [2024-11-21 03:18:05.353867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:17.801 [2024-11-21 03:18:05.356882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.801 [2024-11-21 03:18:05.356978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.801 pt3 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.801 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.061 [2024-11-21 03:18:05.369672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.061 [2024-11-21 03:18:05.372349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.061 [2024-11-21 03:18:05.372476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.061 [2024-11-21 03:18:05.372708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:18.061 [2024-11-21 03:18:05.372766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.061 [2024-11-21 03:18:05.373165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:18.061 [2024-11-21 03:18:05.373398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:18.061 [2024-11-21 03:18:05.373447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:18.061 [2024-11-21 
03:18:05.373696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.061 "name": "raid_bdev1", 00:09:18.061 "uuid": 
"4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:18.061 "strip_size_kb": 64, 00:09:18.061 "state": "online", 00:09:18.061 "raid_level": "raid0", 00:09:18.061 "superblock": true, 00:09:18.061 "num_base_bdevs": 3, 00:09:18.061 "num_base_bdevs_discovered": 3, 00:09:18.061 "num_base_bdevs_operational": 3, 00:09:18.061 "base_bdevs_list": [ 00:09:18.061 { 00:09:18.061 "name": "pt1", 00:09:18.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.061 "is_configured": true, 00:09:18.061 "data_offset": 2048, 00:09:18.061 "data_size": 63488 00:09:18.061 }, 00:09:18.061 { 00:09:18.061 "name": "pt2", 00:09:18.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.061 "is_configured": true, 00:09:18.061 "data_offset": 2048, 00:09:18.061 "data_size": 63488 00:09:18.061 }, 00:09:18.061 { 00:09:18.061 "name": "pt3", 00:09:18.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.061 "is_configured": true, 00:09:18.061 "data_offset": 2048, 00:09:18.061 "data_size": 63488 00:09:18.061 } 00:09:18.061 ] 00:09:18.061 }' 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.061 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.322 03:18:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.322 [2024-11-21 03:18:05.830294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.322 "name": "raid_bdev1", 00:09:18.322 "aliases": [ 00:09:18.322 "4a0356e6-41ef-408e-bee7-56d9d0cdef3a" 00:09:18.322 ], 00:09:18.322 "product_name": "Raid Volume", 00:09:18.322 "block_size": 512, 00:09:18.322 "num_blocks": 190464, 00:09:18.322 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:18.322 "assigned_rate_limits": { 00:09:18.322 "rw_ios_per_sec": 0, 00:09:18.322 "rw_mbytes_per_sec": 0, 00:09:18.322 "r_mbytes_per_sec": 0, 00:09:18.322 "w_mbytes_per_sec": 0 00:09:18.322 }, 00:09:18.322 "claimed": false, 00:09:18.322 "zoned": false, 00:09:18.322 "supported_io_types": { 00:09:18.322 "read": true, 00:09:18.322 "write": true, 00:09:18.322 "unmap": true, 00:09:18.322 "flush": true, 00:09:18.322 "reset": true, 00:09:18.322 "nvme_admin": false, 00:09:18.322 "nvme_io": false, 00:09:18.322 "nvme_io_md": false, 00:09:18.322 "write_zeroes": true, 00:09:18.322 "zcopy": false, 00:09:18.322 "get_zone_info": false, 00:09:18.322 "zone_management": false, 00:09:18.322 "zone_append": false, 00:09:18.322 "compare": false, 00:09:18.322 "compare_and_write": false, 00:09:18.322 "abort": false, 00:09:18.322 "seek_hole": false, 00:09:18.322 "seek_data": false, 00:09:18.322 "copy": false, 00:09:18.322 "nvme_iov_md": false 00:09:18.322 }, 00:09:18.322 "memory_domains": [ 00:09:18.322 { 00:09:18.322 "dma_device_id": "system", 00:09:18.322 "dma_device_type": 
1 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.322 "dma_device_type": 2 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "dma_device_id": "system", 00:09:18.322 "dma_device_type": 1 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.322 "dma_device_type": 2 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "dma_device_id": "system", 00:09:18.322 "dma_device_type": 1 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.322 "dma_device_type": 2 00:09:18.322 } 00:09:18.322 ], 00:09:18.322 "driver_specific": { 00:09:18.322 "raid": { 00:09:18.322 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:18.322 "strip_size_kb": 64, 00:09:18.322 "state": "online", 00:09:18.322 "raid_level": "raid0", 00:09:18.322 "superblock": true, 00:09:18.322 "num_base_bdevs": 3, 00:09:18.322 "num_base_bdevs_discovered": 3, 00:09:18.322 "num_base_bdevs_operational": 3, 00:09:18.322 "base_bdevs_list": [ 00:09:18.322 { 00:09:18.322 "name": "pt1", 00:09:18.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.322 "is_configured": true, 00:09:18.322 "data_offset": 2048, 00:09:18.322 "data_size": 63488 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "name": "pt2", 00:09:18.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.322 "is_configured": true, 00:09:18.322 "data_offset": 2048, 00:09:18.322 "data_size": 63488 00:09:18.322 }, 00:09:18.322 { 00:09:18.322 "name": "pt3", 00:09:18.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.322 "is_configured": true, 00:09:18.322 "data_offset": 2048, 00:09:18.322 "data_size": 63488 00:09:18.322 } 00:09:18.322 ] 00:09:18.322 } 00:09:18.322 } 00:09:18.322 }' 00:09:18.322 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:09:18.582 pt2 00:09:18.582 pt3' 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.582 03:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.582 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.583 [2024-11-21 03:18:06.106225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.583 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4a0356e6-41ef-408e-bee7-56d9d0cdef3a 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4a0356e6-41ef-408e-bee7-56d9d0cdef3a ']' 00:09:18.842 03:18:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.842 [2024-11-21 03:18:06.153909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.842 [2024-11-21 03:18:06.154106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.842 [2024-11-21 03:18:06.154228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.842 [2024-11-21 03:18:06.154306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.842 [2024-11-21 03:18:06.154320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.842 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 [2024-11-21 03:18:06.329983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:18.843 [2024-11-21 03:18:06.332312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:18.843 [2024-11-21 03:18:06.332373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:18.843 [2024-11-21 03:18:06.332423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:18.843 [2024-11-21 03:18:06.332481] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:18.843 [2024-11-21 03:18:06.332499] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:18.843 [2024-11-21 03:18:06.332514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.843 [2024-11-21 03:18:06.332524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:18.843 request: 00:09:18.843 { 00:09:18.843 "name": "raid_bdev1", 00:09:18.843 "raid_level": "raid0", 00:09:18.843 "base_bdevs": [ 00:09:18.843 "malloc1", 00:09:18.843 "malloc2", 00:09:18.843 "malloc3" 00:09:18.843 ], 00:09:18.843 "strip_size_kb": 64, 00:09:18.843 "superblock": false, 00:09:18.843 "method": "bdev_raid_create", 00:09:18.843 "req_id": 1 00:09:18.843 } 00:09:18.843 Got JSON-RPC error response 00:09:18.843 response: 00:09:18.843 { 00:09:18.843 "code": -17, 00:09:18.843 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:18.843 } 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.843 [2024-11-21 03:18:06.397941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.843 [2024-11-21 03:18:06.398040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.843 [2024-11-21 03:18:06.398065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:18.843 [2024-11-21 03:18:06.398074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.843 [2024-11-21 03:18:06.400619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.843 [2024-11-21 03:18:06.400744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.843 [2024-11-21 03:18:06.400859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:18.843 [2024-11-21 03:18:06.400900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.843 pt1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:18.843 03:18:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.843 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.844 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.102 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.102 "name": "raid_bdev1", 00:09:19.102 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:19.102 "strip_size_kb": 64, 00:09:19.102 "state": "configuring", 00:09:19.102 "raid_level": "raid0", 00:09:19.102 "superblock": true, 00:09:19.102 "num_base_bdevs": 3, 00:09:19.102 "num_base_bdevs_discovered": 1, 00:09:19.102 "num_base_bdevs_operational": 3, 00:09:19.102 "base_bdevs_list": [ 
00:09:19.102 { 00:09:19.102 "name": "pt1", 00:09:19.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.102 "is_configured": true, 00:09:19.102 "data_offset": 2048, 00:09:19.102 "data_size": 63488 00:09:19.102 }, 00:09:19.102 { 00:09:19.102 "name": null, 00:09:19.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.102 "is_configured": false, 00:09:19.102 "data_offset": 2048, 00:09:19.102 "data_size": 63488 00:09:19.102 }, 00:09:19.102 { 00:09:19.103 "name": null, 00:09:19.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.103 "is_configured": false, 00:09:19.103 "data_offset": 2048, 00:09:19.103 "data_size": 63488 00:09:19.103 } 00:09:19.103 ] 00:09:19.103 }' 00:09:19.103 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.103 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.362 [2024-11-21 03:18:06.874166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.362 [2024-11-21 03:18:06.874325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.362 [2024-11-21 03:18:06.874392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:19.362 [2024-11-21 03:18:06.874438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.362 [2024-11-21 03:18:06.875125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.362 [2024-11-21 
03:18:06.875208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.362 [2024-11-21 03:18:06.875373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.362 [2024-11-21 03:18:06.875450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.362 pt2 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.362 [2024-11-21 03:18:06.886232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.362 03:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.622 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.622 "name": "raid_bdev1", 00:09:19.622 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:19.622 "strip_size_kb": 64, 00:09:19.622 "state": "configuring", 00:09:19.622 "raid_level": "raid0", 00:09:19.622 "superblock": true, 00:09:19.622 "num_base_bdevs": 3, 00:09:19.622 "num_base_bdevs_discovered": 1, 00:09:19.622 "num_base_bdevs_operational": 3, 00:09:19.622 "base_bdevs_list": [ 00:09:19.622 { 00:09:19.622 "name": "pt1", 00:09:19.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.623 "is_configured": true, 00:09:19.623 "data_offset": 2048, 00:09:19.623 "data_size": 63488 00:09:19.623 }, 00:09:19.623 { 00:09:19.623 "name": null, 00:09:19.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.623 "is_configured": false, 00:09:19.623 "data_offset": 0, 00:09:19.623 "data_size": 63488 00:09:19.623 }, 00:09:19.623 { 00:09:19.623 "name": null, 00:09:19.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.623 "is_configured": false, 00:09:19.623 "data_offset": 2048, 00:09:19.623 "data_size": 63488 00:09:19.623 } 00:09:19.623 ] 00:09:19.623 }' 00:09:19.623 03:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.623 03:18:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.883 [2024-11-21 03:18:07.414364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.883 [2024-11-21 03:18:07.414513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.883 [2024-11-21 03:18:07.414557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:19.883 [2024-11-21 03:18:07.414591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.883 [2024-11-21 03:18:07.415163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.883 [2024-11-21 03:18:07.415228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.883 [2024-11-21 03:18:07.415357] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.883 [2024-11-21 03:18:07.415417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.883 pt2 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.883 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.883 [2024-11-21 03:18:07.426285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.883 [2024-11-21 03:18:07.426378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.883 [2024-11-21 03:18:07.426418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:19.883 [2024-11-21 03:18:07.426453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.883 [2024-11-21 03:18:07.426851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.883 [2024-11-21 03:18:07.426918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.884 [2024-11-21 03:18:07.427012] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.884 [2024-11-21 03:18:07.427074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.884 [2024-11-21 03:18:07.427204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:19.884 [2024-11-21 03:18:07.427247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.884 [2024-11-21 03:18:07.427556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:19.884 [2024-11-21 03:18:07.427715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:19.884 [2024-11-21 03:18:07.427756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:19.884 [2024-11-21 03:18:07.427905] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.884 pt3 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.884 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.143 03:18:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.143 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.143 "name": "raid_bdev1", 00:09:20.143 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:20.143 "strip_size_kb": 64, 00:09:20.143 "state": "online", 00:09:20.143 "raid_level": "raid0", 00:09:20.143 "superblock": true, 00:09:20.143 "num_base_bdevs": 3, 00:09:20.143 "num_base_bdevs_discovered": 3, 00:09:20.143 "num_base_bdevs_operational": 3, 00:09:20.143 "base_bdevs_list": [ 00:09:20.143 { 00:09:20.143 "name": "pt1", 00:09:20.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.143 "is_configured": true, 00:09:20.143 "data_offset": 2048, 00:09:20.143 "data_size": 63488 00:09:20.143 }, 00:09:20.143 { 00:09:20.143 "name": "pt2", 00:09:20.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.143 "is_configured": true, 00:09:20.143 "data_offset": 2048, 00:09:20.143 "data_size": 63488 00:09:20.143 }, 00:09:20.143 { 00:09:20.143 "name": "pt3", 00:09:20.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.143 "is_configured": true, 00:09:20.143 "data_offset": 2048, 00:09:20.143 "data_size": 63488 00:09:20.143 } 00:09:20.143 ] 00:09:20.143 }' 00:09:20.143 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.143 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.402 03:18:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.402 [2024-11-21 03:18:07.898779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.402 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.402 "name": "raid_bdev1", 00:09:20.402 "aliases": [ 00:09:20.402 "4a0356e6-41ef-408e-bee7-56d9d0cdef3a" 00:09:20.402 ], 00:09:20.402 "product_name": "Raid Volume", 00:09:20.402 "block_size": 512, 00:09:20.402 "num_blocks": 190464, 00:09:20.402 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:20.402 "assigned_rate_limits": { 00:09:20.402 "rw_ios_per_sec": 0, 00:09:20.402 "rw_mbytes_per_sec": 0, 00:09:20.402 "r_mbytes_per_sec": 0, 00:09:20.402 "w_mbytes_per_sec": 0 00:09:20.402 }, 00:09:20.402 "claimed": false, 00:09:20.402 "zoned": false, 00:09:20.402 "supported_io_types": { 00:09:20.402 "read": true, 00:09:20.402 "write": true, 00:09:20.402 "unmap": true, 00:09:20.402 "flush": true, 00:09:20.402 "reset": true, 00:09:20.402 "nvme_admin": false, 00:09:20.402 "nvme_io": false, 00:09:20.402 "nvme_io_md": false, 00:09:20.402 "write_zeroes": true, 00:09:20.402 "zcopy": false, 00:09:20.402 "get_zone_info": false, 00:09:20.402 "zone_management": false, 00:09:20.402 "zone_append": false, 00:09:20.402 "compare": false, 00:09:20.402 "compare_and_write": false, 00:09:20.402 "abort": false, 00:09:20.402 "seek_hole": false, 00:09:20.402 
"seek_data": false, 00:09:20.402 "copy": false, 00:09:20.402 "nvme_iov_md": false 00:09:20.402 }, 00:09:20.402 "memory_domains": [ 00:09:20.402 { 00:09:20.402 "dma_device_id": "system", 00:09:20.402 "dma_device_type": 1 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.402 "dma_device_type": 2 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "dma_device_id": "system", 00:09:20.402 "dma_device_type": 1 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.402 "dma_device_type": 2 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "dma_device_id": "system", 00:09:20.402 "dma_device_type": 1 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.402 "dma_device_type": 2 00:09:20.402 } 00:09:20.402 ], 00:09:20.402 "driver_specific": { 00:09:20.402 "raid": { 00:09:20.402 "uuid": "4a0356e6-41ef-408e-bee7-56d9d0cdef3a", 00:09:20.402 "strip_size_kb": 64, 00:09:20.402 "state": "online", 00:09:20.402 "raid_level": "raid0", 00:09:20.402 "superblock": true, 00:09:20.402 "num_base_bdevs": 3, 00:09:20.402 "num_base_bdevs_discovered": 3, 00:09:20.402 "num_base_bdevs_operational": 3, 00:09:20.402 "base_bdevs_list": [ 00:09:20.402 { 00:09:20.402 "name": "pt1", 00:09:20.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.402 "is_configured": true, 00:09:20.402 "data_offset": 2048, 00:09:20.402 "data_size": 63488 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "name": "pt2", 00:09:20.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.402 "is_configured": true, 00:09:20.402 "data_offset": 2048, 00:09:20.402 "data_size": 63488 00:09:20.402 }, 00:09:20.402 { 00:09:20.402 "name": "pt3", 00:09:20.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.402 "is_configured": true, 00:09:20.402 "data_offset": 2048, 00:09:20.402 "data_size": 63488 00:09:20.402 } 00:09:20.402 ] 00:09:20.402 } 00:09:20.402 } 00:09:20.402 }' 00:09:20.402 03:18:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.662 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.662 pt2 00:09:20.662 pt3' 00:09:20.662 03:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.662 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:20.663 [2024-11-21 03:18:08.182788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
4a0356e6-41ef-408e-bee7-56d9d0cdef3a '!=' 4a0356e6-41ef-408e-bee7-56d9d0cdef3a ']' 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78257 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78257 ']' 00:09:20.663 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78257 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78257 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78257' 00:09:20.946 killing process with pid 78257 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 78257 00:09:20.946 [2024-11-21 03:18:08.263851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.946 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 78257 00:09:20.946 [2024-11-21 03:18:08.264057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.946 [2024-11-21 03:18:08.264146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:09:20.946 [2024-11-21 03:18:08.264161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:20.946 [2024-11-21 03:18:08.328076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.207 03:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.207 00:09:21.207 real 0m4.455s 00:09:21.207 user 0m6.995s 00:09:21.207 sys 0m0.932s 00:09:21.207 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.207 03:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.207 ************************************ 00:09:21.207 END TEST raid_superblock_test 00:09:21.207 ************************************ 00:09:21.207 03:18:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:21.207 03:18:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.207 03:18:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.207 03:18:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.207 ************************************ 00:09:21.207 START TEST raid_read_error_test 00:09:21.207 ************************************ 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.207 03:18:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.207 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kmAF6eBrJR 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78499 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78499 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78499 ']' 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.208 03:18:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.468 [2024-11-21 03:18:08.823001] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:21.469 [2024-11-21 03:18:08.823154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78499 ] 00:09:21.469 [2024-11-21 03:18:08.962402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:21.469 [2024-11-21 03:18:08.987773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.469 [2024-11-21 03:18:09.029751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.730 [2024-11-21 03:18:09.110630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.730 [2024-11-21 03:18:09.110689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 BaseBdev1_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 true 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 [2024-11-21 03:18:09.720903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.301 [2024-11-21 03:18:09.721000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.301 [2024-11-21 03:18:09.721041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.301 [2024-11-21 03:18:09.721068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.301 [2024-11-21 03:18:09.723699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.301 [2024-11-21 03:18:09.723742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.301 BaseBdev1 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 BaseBdev2_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 true 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 [2024-11-21 03:18:09.764574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.301 [2024-11-21 03:18:09.764646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.301 [2024-11-21 03:18:09.764665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.301 [2024-11-21 03:18:09.764676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.301 [2024-11-21 03:18:09.767138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.301 [2024-11-21 03:18:09.767176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.301 BaseBdev2 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 BaseBdev3_malloc 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.301 
03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 true 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 [2024-11-21 03:18:09.804358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.301 [2024-11-21 03:18:09.804435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.301 [2024-11-21 03:18:09.804455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.301 [2024-11-21 03:18:09.804468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.301 [2024-11-21 03:18:09.807088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.301 [2024-11-21 03:18:09.807129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.301 BaseBdev3 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.301 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.301 [2024-11-21 03:18:09.812384] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.301 [2024-11-21 03:18:09.814561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.301 [2024-11-21 03:18:09.814637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.301 [2024-11-21 03:18:09.814827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:22.301 [2024-11-21 03:18:09.814840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.301 [2024-11-21 03:18:09.815129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:22.301 [2024-11-21 03:18:09.815299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:22.301 [2024-11-21 03:18:09.815313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:22.301 [2024-11-21 03:18:09.815441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.302 "name": "raid_bdev1", 00:09:22.302 "uuid": "f95198bb-1992-400c-9722-5f1b2b7623be", 00:09:22.302 "strip_size_kb": 64, 00:09:22.302 "state": "online", 00:09:22.302 "raid_level": "raid0", 00:09:22.302 "superblock": true, 00:09:22.302 "num_base_bdevs": 3, 00:09:22.302 "num_base_bdevs_discovered": 3, 00:09:22.302 "num_base_bdevs_operational": 3, 00:09:22.302 "base_bdevs_list": [ 00:09:22.302 { 00:09:22.302 "name": "BaseBdev1", 00:09:22.302 "uuid": "98c0c285-aade-5bd9-bec0-0519b2a04f4f", 00:09:22.302 "is_configured": true, 00:09:22.302 "data_offset": 2048, 00:09:22.302 "data_size": 63488 00:09:22.302 }, 00:09:22.302 { 00:09:22.302 "name": "BaseBdev2", 00:09:22.302 "uuid": "107b6145-6df9-5937-9edb-98c774f760a0", 00:09:22.302 "is_configured": true, 00:09:22.302 "data_offset": 2048, 00:09:22.302 "data_size": 63488 00:09:22.302 }, 00:09:22.302 { 00:09:22.302 "name": "BaseBdev3", 00:09:22.302 "uuid": "e985037e-f249-5a7e-b029-5fd0f3c17baf", 00:09:22.302 "is_configured": true, 00:09:22.302 "data_offset": 
2048, 00:09:22.302 "data_size": 63488 00:09:22.302 } 00:09:22.302 ] 00:09:22.302 }' 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.302 03:18:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.871 03:18:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.871 03:18:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.871 [2024-11-21 03:18:10.361085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:23.811 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:23.811 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.811 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.811 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.812 "name": "raid_bdev1", 00:09:23.812 "uuid": "f95198bb-1992-400c-9722-5f1b2b7623be", 00:09:23.812 "strip_size_kb": 64, 00:09:23.812 "state": "online", 00:09:23.812 "raid_level": "raid0", 00:09:23.812 "superblock": true, 00:09:23.812 "num_base_bdevs": 3, 00:09:23.812 "num_base_bdevs_discovered": 3, 00:09:23.812 "num_base_bdevs_operational": 3, 00:09:23.812 "base_bdevs_list": [ 00:09:23.812 { 00:09:23.812 "name": "BaseBdev1", 00:09:23.812 "uuid": "98c0c285-aade-5bd9-bec0-0519b2a04f4f", 00:09:23.812 "is_configured": true, 00:09:23.812 "data_offset": 2048, 00:09:23.812 "data_size": 63488 00:09:23.812 }, 00:09:23.812 { 00:09:23.812 "name": "BaseBdev2", 00:09:23.812 "uuid": "107b6145-6df9-5937-9edb-98c774f760a0", 00:09:23.812 "is_configured": true, 00:09:23.812 "data_offset": 2048, 
00:09:23.812 "data_size": 63488 00:09:23.812 }, 00:09:23.812 { 00:09:23.812 "name": "BaseBdev3", 00:09:23.812 "uuid": "e985037e-f249-5a7e-b029-5fd0f3c17baf", 00:09:23.812 "is_configured": true, 00:09:23.812 "data_offset": 2048, 00:09:23.812 "data_size": 63488 00:09:23.812 } 00:09:23.812 ] 00:09:23.812 }' 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.812 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.382 [2024-11-21 03:18:11.729994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.382 [2024-11-21 03:18:11.730056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.382 [2024-11-21 03:18:11.733234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.382 [2024-11-21 03:18:11.733323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.382 [2024-11-21 03:18:11.733377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.382 [2024-11-21 03:18:11.733389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78499 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78499 ']' 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@958 -- # kill -0 78499 00:09:24.382 { 00:09:24.382 "results": [ 00:09:24.382 { 00:09:24.382 "job": "raid_bdev1", 00:09:24.382 "core_mask": "0x1", 00:09:24.382 "workload": "randrw", 00:09:24.382 "percentage": 50, 00:09:24.382 "status": "finished", 00:09:24.382 "queue_depth": 1, 00:09:24.382 "io_size": 131072, 00:09:24.382 "runtime": 1.366618, 00:09:24.382 "iops": 14005.376776831565, 00:09:24.382 "mibps": 1750.6720971039456, 00:09:24.382 "io_failed": 1, 00:09:24.382 "io_timeout": 0, 00:09:24.382 "avg_latency_us": 100.22536704167793, 00:09:24.382 "min_latency_us": 22.759522356837792, 00:09:24.382 "max_latency_us": 1485.1704000697289 00:09:24.382 } 00:09:24.382 ], 00:09:24.382 "core_count": 1 00:09:24.382 } 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78499 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.382 killing process with pid 78499 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78499' 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78499 00:09:24.382 03:18:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78499 00:09:24.382 [2024-11-21 03:18:11.780057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.382 [2024-11-21 03:18:11.832359] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.kmAF6eBrJR 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:24.642 00:09:24.642 real 0m3.473s 00:09:24.642 user 0m4.292s 00:09:24.642 sys 0m0.644s 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.642 03:18:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.642 ************************************ 00:09:24.642 END TEST raid_read_error_test 00:09:24.642 ************************************ 00:09:24.903 03:18:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:24.903 03:18:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:24.903 03:18:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.903 03:18:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.903 ************************************ 00:09:24.903 START TEST raid_write_error_test 00:09:24.903 ************************************ 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # 
local num_base_bdevs=3 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local 
fail_per_s 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AnZOVLLgOA 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78628 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78628 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78628 ']' 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.903 03:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.903 [2024-11-21 03:18:12.371408] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:09:24.903 [2024-11-21 03:18:12.371548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78628 ] 00:09:25.164 [2024-11-21 03:18:12.514265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:25.164 [2024-11-21 03:18:12.553773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.164 [2024-11-21 03:18:12.595273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.164 [2024-11-21 03:18:12.673235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.164 [2024-11-21 03:18:12.673287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 BaseBdev1_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 true 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 [2024-11-21 03:18:13.356629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:26.104 [2024-11-21 03:18:13.356718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.104 [2024-11-21 03:18:13.356769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:26.104 [2024-11-21 03:18:13.356790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.104 [2024-11-21 03:18:13.359916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.104 [2024-11-21 03:18:13.359978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:26.104 BaseBdev1 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 BaseBdev2_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 true 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 [2024-11-21 03:18:13.394036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:26.104 [2024-11-21 03:18:13.394109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.104 [2024-11-21 03:18:13.394132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:26.104 [2024-11-21 03:18:13.394146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.104 [2024-11-21 03:18:13.397289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.104 [2024-11-21 03:18:13.397336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:26.104 BaseBdev2 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:26.104 03:18:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 BaseBdev3_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 true 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 [2024-11-21 03:18:13.431602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:26.104 [2024-11-21 03:18:13.431698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.104 [2024-11-21 03:18:13.431728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:26.104 [2024-11-21 03:18:13.431745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.104 [2024-11-21 03:18:13.434810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.104 [2024-11-21 03:18:13.434858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:26.104 BaseBdev3 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 [2024-11-21 03:18:13.439762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.104 [2024-11-21 03:18:13.452154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.104 [2024-11-21 03:18:13.452256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.104 [2024-11-21 03:18:13.452548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.104 [2024-11-21 03:18:13.452564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.104 [2024-11-21 03:18:13.452958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:26.104 [2024-11-21 03:18:13.453173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.104 [2024-11-21 03:18:13.453191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:26.104 [2024-11-21 03:18:13.453386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.104 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.105 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.105 "name": "raid_bdev1", 00:09:26.105 "uuid": "1a244334-d96b-48ff-847d-b5cde53b0fe8", 00:09:26.105 "strip_size_kb": 64, 00:09:26.105 "state": "online", 00:09:26.105 "raid_level": "raid0", 00:09:26.105 "superblock": true, 00:09:26.105 "num_base_bdevs": 3, 00:09:26.105 "num_base_bdevs_discovered": 3, 00:09:26.105 "num_base_bdevs_operational": 3, 00:09:26.105 "base_bdevs_list": [ 00:09:26.105 { 00:09:26.105 "name": "BaseBdev1", 00:09:26.105 "uuid": "73f8510c-9a6b-5e42-b745-cbba140da0f0", 00:09:26.105 "is_configured": true, 00:09:26.105 "data_offset": 2048, 
00:09:26.105 "data_size": 63488 00:09:26.105 }, 00:09:26.105 { 00:09:26.105 "name": "BaseBdev2", 00:09:26.105 "uuid": "76f38f33-89f5-5da2-a361-9bfb73e150be", 00:09:26.105 "is_configured": true, 00:09:26.105 "data_offset": 2048, 00:09:26.105 "data_size": 63488 00:09:26.105 }, 00:09:26.105 { 00:09:26.105 "name": "BaseBdev3", 00:09:26.105 "uuid": "44cd4bb4-54a4-5217-a3d5-2acc0cc85f8c", 00:09:26.105 "is_configured": true, 00:09:26.105 "data_offset": 2048, 00:09:26.105 "data_size": 63488 00:09:26.105 } 00:09:26.105 ] 00:09:26.105 }' 00:09:26.105 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.105 03:18:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.363 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.363 03:18:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.642 [2024-11-21 03:18:13.976513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.579 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.580 "name": "raid_bdev1", 00:09:27.580 "uuid": "1a244334-d96b-48ff-847d-b5cde53b0fe8", 00:09:27.580 "strip_size_kb": 64, 00:09:27.580 "state": "online", 00:09:27.580 "raid_level": "raid0", 00:09:27.580 "superblock": true, 00:09:27.580 "num_base_bdevs": 3, 00:09:27.580 "num_base_bdevs_discovered": 3, 
00:09:27.580 "num_base_bdevs_operational": 3, 00:09:27.580 "base_bdevs_list": [ 00:09:27.580 { 00:09:27.580 "name": "BaseBdev1", 00:09:27.580 "uuid": "73f8510c-9a6b-5e42-b745-cbba140da0f0", 00:09:27.580 "is_configured": true, 00:09:27.580 "data_offset": 2048, 00:09:27.580 "data_size": 63488 00:09:27.580 }, 00:09:27.580 { 00:09:27.580 "name": "BaseBdev2", 00:09:27.580 "uuid": "76f38f33-89f5-5da2-a361-9bfb73e150be", 00:09:27.580 "is_configured": true, 00:09:27.580 "data_offset": 2048, 00:09:27.580 "data_size": 63488 00:09:27.580 }, 00:09:27.580 { 00:09:27.580 "name": "BaseBdev3", 00:09:27.580 "uuid": "44cd4bb4-54a4-5217-a3d5-2acc0cc85f8c", 00:09:27.580 "is_configured": true, 00:09:27.580 "data_offset": 2048, 00:09:27.580 "data_size": 63488 00:09:27.580 } 00:09:27.580 ] 00:09:27.580 }' 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.580 03:18:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.839 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.839 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.839 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.839 [2024-11-21 03:18:15.250849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.839 [2024-11-21 03:18:15.250922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.839 [2024-11-21 03:18:15.254283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.839 [2024-11-21 03:18:15.254369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.839 [2024-11-21 03:18:15.254427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.839 { 00:09:27.839 "results": [ 
00:09:27.839 { 00:09:27.839 "job": "raid_bdev1", 00:09:27.839 "core_mask": "0x1", 00:09:27.839 "workload": "randrw", 00:09:27.839 "percentage": 50, 00:09:27.839 "status": "finished", 00:09:27.839 "queue_depth": 1, 00:09:27.839 "io_size": 131072, 00:09:27.839 "runtime": 1.271547, 00:09:27.839 "iops": 11054.251238845281, 00:09:27.839 "mibps": 1381.7814048556602, 00:09:27.839 "io_failed": 1, 00:09:27.839 "io_timeout": 0, 00:09:27.839 "avg_latency_us": 127.06304717043353, 00:09:27.839 "min_latency_us": 31.01542752549464, 00:09:27.839 "max_latency_us": 1770.7800923908308 00:09:27.839 } 00:09:27.839 ], 00:09:27.839 "core_count": 1 00:09:27.839 } 00:09:27.839 [2024-11-21 03:18:15.254441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78628 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78628 ']' 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78628 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78628 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.840 killing process with pid 78628 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78628' 00:09:27.840 03:18:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78628 00:09:27.840 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78628 00:09:27.840 [2024-11-21 03:18:15.285681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.840 [2024-11-21 03:18:15.341250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AnZOVLLgOA 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:09:28.406 00:09:28.406 real 0m3.444s 00:09:28.406 user 0m4.218s 00:09:28.406 sys 0m0.574s 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.406 03:18:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.406 ************************************ 00:09:28.406 END TEST raid_write_error_test 00:09:28.406 ************************************ 00:09:28.406 03:18:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:28.406 03:18:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:28.406 03:18:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.406 03:18:15 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.406 03:18:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.406 ************************************ 00:09:28.406 START TEST raid_state_function_test 00:09:28.406 ************************************ 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78761 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.406 Process raid pid: 78761 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78761' 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78761 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78761 ']' 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.406 03:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.406 [2024-11-21 03:18:15.842630] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:28.406 [2024-11-21 03:18:15.843443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.666 [2024-11-21 03:18:15.999146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:28.666 [2024-11-21 03:18:16.023154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.666 [2024-11-21 03:18:16.083541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.666 [2024-11-21 03:18:16.167095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.666 [2024-11-21 03:18:16.167138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.235 [2024-11-21 03:18:16.792204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.235 [2024-11-21 03:18:16.792552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.235 [2024-11-21 03:18:16.792594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.235 [2024-11-21 03:18:16.792694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.235 [2024-11-21 03:18:16.792726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.235 [2024-11-21 03:18:16.792805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.235 03:18:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.235 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.495 "name": "Existed_Raid", 00:09:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.495 "strip_size_kb": 64, 00:09:29.495 "state": "configuring", 00:09:29.495 
"raid_level": "concat", 00:09:29.495 "superblock": false, 00:09:29.495 "num_base_bdevs": 3, 00:09:29.495 "num_base_bdevs_discovered": 0, 00:09:29.495 "num_base_bdevs_operational": 3, 00:09:29.495 "base_bdevs_list": [ 00:09:29.495 { 00:09:29.495 "name": "BaseBdev1", 00:09:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.495 "is_configured": false, 00:09:29.495 "data_offset": 0, 00:09:29.495 "data_size": 0 00:09:29.495 }, 00:09:29.495 { 00:09:29.495 "name": "BaseBdev2", 00:09:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.495 "is_configured": false, 00:09:29.495 "data_offset": 0, 00:09:29.495 "data_size": 0 00:09:29.495 }, 00:09:29.495 { 00:09:29.495 "name": "BaseBdev3", 00:09:29.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.495 "is_configured": false, 00:09:29.495 "data_offset": 0, 00:09:29.495 "data_size": 0 00:09:29.495 } 00:09:29.495 ] 00:09:29.495 }' 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.495 03:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.755 [2024-11-21 03:18:17.292161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.755 [2024-11-21 03:18:17.292216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.755 [2024-11-21 03:18:17.300198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.755 [2024-11-21 03:18:17.300630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.755 [2024-11-21 03:18:17.300662] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.755 [2024-11-21 03:18:17.300742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.755 [2024-11-21 03:18:17.300765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.755 [2024-11-21 03:18:17.300817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.755 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.755 [2024-11-21 03:18:17.318399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.755 BaseBdev1 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:30.014 03:18:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.014 [ 00:09:30.014 { 00:09:30.014 "name": "BaseBdev1", 00:09:30.014 "aliases": [ 00:09:30.014 "479a5456-886f-4590-892a-2b506477c02d" 00:09:30.014 ], 00:09:30.014 "product_name": "Malloc disk", 00:09:30.014 "block_size": 512, 00:09:30.014 "num_blocks": 65536, 00:09:30.014 "uuid": "479a5456-886f-4590-892a-2b506477c02d", 00:09:30.014 "assigned_rate_limits": { 00:09:30.014 "rw_ios_per_sec": 0, 00:09:30.014 "rw_mbytes_per_sec": 0, 00:09:30.014 "r_mbytes_per_sec": 0, 00:09:30.014 "w_mbytes_per_sec": 0 00:09:30.014 }, 00:09:30.014 "claimed": true, 00:09:30.014 "claim_type": "exclusive_write", 00:09:30.014 "zoned": false, 00:09:30.014 "supported_io_types": { 00:09:30.014 "read": true, 00:09:30.014 "write": true, 00:09:30.014 "unmap": true, 00:09:30.014 "flush": true, 
00:09:30.014 "reset": true, 00:09:30.014 "nvme_admin": false, 00:09:30.014 "nvme_io": false, 00:09:30.014 "nvme_io_md": false, 00:09:30.014 "write_zeroes": true, 00:09:30.014 "zcopy": true, 00:09:30.014 "get_zone_info": false, 00:09:30.014 "zone_management": false, 00:09:30.014 "zone_append": false, 00:09:30.014 "compare": false, 00:09:30.014 "compare_and_write": false, 00:09:30.014 "abort": true, 00:09:30.014 "seek_hole": false, 00:09:30.014 "seek_data": false, 00:09:30.014 "copy": true, 00:09:30.014 "nvme_iov_md": false 00:09:30.014 }, 00:09:30.014 "memory_domains": [ 00:09:30.014 { 00:09:30.014 "dma_device_id": "system", 00:09:30.014 "dma_device_type": 1 00:09:30.014 }, 00:09:30.014 { 00:09:30.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.014 "dma_device_type": 2 00:09:30.014 } 00:09:30.014 ], 00:09:30.014 "driver_specific": {} 00:09:30.014 } 00:09:30.014 ] 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.014 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.015 "name": "Existed_Raid", 00:09:30.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.015 "strip_size_kb": 64, 00:09:30.015 "state": "configuring", 00:09:30.015 "raid_level": "concat", 00:09:30.015 "superblock": false, 00:09:30.015 "num_base_bdevs": 3, 00:09:30.015 "num_base_bdevs_discovered": 1, 00:09:30.015 "num_base_bdevs_operational": 3, 00:09:30.015 "base_bdevs_list": [ 00:09:30.015 { 00:09:30.015 "name": "BaseBdev1", 00:09:30.015 "uuid": "479a5456-886f-4590-892a-2b506477c02d", 00:09:30.015 "is_configured": true, 00:09:30.015 "data_offset": 0, 00:09:30.015 "data_size": 65536 00:09:30.015 }, 00:09:30.015 { 00:09:30.015 "name": "BaseBdev2", 00:09:30.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.015 "is_configured": false, 00:09:30.015 "data_offset": 0, 00:09:30.015 "data_size": 0 00:09:30.015 }, 00:09:30.015 { 00:09:30.015 "name": "BaseBdev3", 00:09:30.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.015 "is_configured": false, 00:09:30.015 "data_offset": 0, 00:09:30.015 "data_size": 0 
00:09:30.015 } 00:09:30.015 ] 00:09:30.015 }' 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.015 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.273 [2024-11-21 03:18:17.826595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.273 [2024-11-21 03:18:17.826691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.273 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.273 [2024-11-21 03:18:17.834758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.531 [2024-11-21 03:18:17.837146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.531 [2024-11-21 03:18:17.837520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.531 [2024-11-21 03:18:17.837553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.531 [2024-11-21 03:18:17.837624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.531 03:18:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.531 "name": "Existed_Raid", 00:09:30.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.531 "strip_size_kb": 64, 00:09:30.531 "state": "configuring", 00:09:30.531 "raid_level": "concat", 00:09:30.531 "superblock": false, 00:09:30.531 "num_base_bdevs": 3, 00:09:30.531 "num_base_bdevs_discovered": 1, 00:09:30.531 "num_base_bdevs_operational": 3, 00:09:30.531 "base_bdevs_list": [ 00:09:30.531 { 00:09:30.531 "name": "BaseBdev1", 00:09:30.531 "uuid": "479a5456-886f-4590-892a-2b506477c02d", 00:09:30.531 "is_configured": true, 00:09:30.531 "data_offset": 0, 00:09:30.531 "data_size": 65536 00:09:30.531 }, 00:09:30.531 { 00:09:30.531 "name": "BaseBdev2", 00:09:30.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.531 "is_configured": false, 00:09:30.531 "data_offset": 0, 00:09:30.531 "data_size": 0 00:09:30.531 }, 00:09:30.531 { 00:09:30.531 "name": "BaseBdev3", 00:09:30.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.531 "is_configured": false, 00:09:30.531 "data_offset": 0, 00:09:30.531 "data_size": 0 00:09:30.531 } 00:09:30.531 ] 00:09:30.531 }' 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.531 03:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.790 [2024-11-21 03:18:18.298001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.790 BaseBdev2 00:09:30.790 
03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.790 [ 00:09:30.790 { 00:09:30.790 "name": "BaseBdev2", 00:09:30.790 "aliases": [ 00:09:30.790 "bf14624e-aaa2-4b59-af16-c8e545414900" 00:09:30.790 ], 00:09:30.790 "product_name": "Malloc disk", 00:09:30.790 "block_size": 512, 00:09:30.790 "num_blocks": 65536, 00:09:30.790 "uuid": "bf14624e-aaa2-4b59-af16-c8e545414900", 00:09:30.790 "assigned_rate_limits": { 00:09:30.790 "rw_ios_per_sec": 0, 00:09:30.790 "rw_mbytes_per_sec": 0, 
00:09:30.790 "r_mbytes_per_sec": 0, 00:09:30.790 "w_mbytes_per_sec": 0 00:09:30.790 }, 00:09:30.790 "claimed": true, 00:09:30.790 "claim_type": "exclusive_write", 00:09:30.790 "zoned": false, 00:09:30.790 "supported_io_types": { 00:09:30.790 "read": true, 00:09:30.790 "write": true, 00:09:30.790 "unmap": true, 00:09:30.790 "flush": true, 00:09:30.790 "reset": true, 00:09:30.790 "nvme_admin": false, 00:09:30.790 "nvme_io": false, 00:09:30.790 "nvme_io_md": false, 00:09:30.790 "write_zeroes": true, 00:09:30.790 "zcopy": true, 00:09:30.790 "get_zone_info": false, 00:09:30.790 "zone_management": false, 00:09:30.790 "zone_append": false, 00:09:30.790 "compare": false, 00:09:30.790 "compare_and_write": false, 00:09:30.790 "abort": true, 00:09:30.790 "seek_hole": false, 00:09:30.790 "seek_data": false, 00:09:30.790 "copy": true, 00:09:30.790 "nvme_iov_md": false 00:09:30.790 }, 00:09:30.790 "memory_domains": [ 00:09:30.790 { 00:09:30.790 "dma_device_id": "system", 00:09:30.790 "dma_device_type": 1 00:09:30.790 }, 00:09:30.790 { 00:09:30.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.790 "dma_device_type": 2 00:09:30.790 } 00:09:30.790 ], 00:09:30.790 "driver_specific": {} 00:09:30.790 } 00:09:30.790 ] 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.790 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.048 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.048 "name": "Existed_Raid", 00:09:31.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.048 "strip_size_kb": 64, 00:09:31.048 "state": "configuring", 00:09:31.048 "raid_level": "concat", 00:09:31.048 "superblock": false, 00:09:31.048 "num_base_bdevs": 3, 00:09:31.048 "num_base_bdevs_discovered": 2, 00:09:31.048 "num_base_bdevs_operational": 3, 00:09:31.048 "base_bdevs_list": [ 00:09:31.048 { 00:09:31.048 "name": "BaseBdev1", 00:09:31.048 "uuid": "479a5456-886f-4590-892a-2b506477c02d", 
00:09:31.048 "is_configured": true, 00:09:31.048 "data_offset": 0, 00:09:31.048 "data_size": 65536 00:09:31.048 }, 00:09:31.048 { 00:09:31.048 "name": "BaseBdev2", 00:09:31.048 "uuid": "bf14624e-aaa2-4b59-af16-c8e545414900", 00:09:31.048 "is_configured": true, 00:09:31.048 "data_offset": 0, 00:09:31.048 "data_size": 65536 00:09:31.048 }, 00:09:31.048 { 00:09:31.048 "name": "BaseBdev3", 00:09:31.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.048 "is_configured": false, 00:09:31.048 "data_offset": 0, 00:09:31.048 "data_size": 0 00:09:31.048 } 00:09:31.048 ] 00:09:31.048 }' 00:09:31.048 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.048 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.306 [2024-11-21 03:18:18.826652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.306 [2024-11-21 03:18:18.826714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:31.306 [2024-11-21 03:18:18.826726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:31.306 [2024-11-21 03:18:18.827126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:31.306 [2024-11-21 03:18:18.827328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:31.306 [2024-11-21 03:18:18.827362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:31.306 [2024-11-21 03:18:18.827634] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.306 BaseBdev3 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.306 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.307 [ 00:09:31.307 { 00:09:31.307 "name": "BaseBdev3", 00:09:31.307 "aliases": [ 00:09:31.307 "0c3cd95f-08d6-46c5-af39-b24d31c3b430" 00:09:31.307 ], 00:09:31.307 "product_name": "Malloc disk", 00:09:31.307 "block_size": 512, 00:09:31.307 "num_blocks": 65536, 00:09:31.307 "uuid": "0c3cd95f-08d6-46c5-af39-b24d31c3b430", 00:09:31.307 
"assigned_rate_limits": { 00:09:31.307 "rw_ios_per_sec": 0, 00:09:31.307 "rw_mbytes_per_sec": 0, 00:09:31.307 "r_mbytes_per_sec": 0, 00:09:31.307 "w_mbytes_per_sec": 0 00:09:31.307 }, 00:09:31.307 "claimed": true, 00:09:31.307 "claim_type": "exclusive_write", 00:09:31.307 "zoned": false, 00:09:31.307 "supported_io_types": { 00:09:31.307 "read": true, 00:09:31.307 "write": true, 00:09:31.307 "unmap": true, 00:09:31.307 "flush": true, 00:09:31.307 "reset": true, 00:09:31.307 "nvme_admin": false, 00:09:31.307 "nvme_io": false, 00:09:31.307 "nvme_io_md": false, 00:09:31.307 "write_zeroes": true, 00:09:31.307 "zcopy": true, 00:09:31.307 "get_zone_info": false, 00:09:31.307 "zone_management": false, 00:09:31.307 "zone_append": false, 00:09:31.307 "compare": false, 00:09:31.307 "compare_and_write": false, 00:09:31.307 "abort": true, 00:09:31.307 "seek_hole": false, 00:09:31.307 "seek_data": false, 00:09:31.307 "copy": true, 00:09:31.307 "nvme_iov_md": false 00:09:31.307 }, 00:09:31.307 "memory_domains": [ 00:09:31.307 { 00:09:31.307 "dma_device_id": "system", 00:09:31.307 "dma_device_type": 1 00:09:31.307 }, 00:09:31.307 { 00:09:31.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.307 "dma_device_type": 2 00:09:31.307 } 00:09:31.307 ], 00:09:31.307 "driver_specific": {} 00:09:31.307 } 00:09:31.307 ] 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.307 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.565 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.565 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.565 "name": "Existed_Raid", 00:09:31.565 "uuid": "0da6b9b0-311c-4dc3-91a7-e6b7eb490aef", 00:09:31.565 "strip_size_kb": 64, 00:09:31.565 "state": "online", 00:09:31.565 "raid_level": "concat", 00:09:31.565 "superblock": false, 00:09:31.565 "num_base_bdevs": 3, 00:09:31.565 "num_base_bdevs_discovered": 3, 00:09:31.565 "num_base_bdevs_operational": 3, 00:09:31.565 "base_bdevs_list": [ 00:09:31.565 { 
00:09:31.565 "name": "BaseBdev1", 00:09:31.565 "uuid": "479a5456-886f-4590-892a-2b506477c02d", 00:09:31.565 "is_configured": true, 00:09:31.565 "data_offset": 0, 00:09:31.565 "data_size": 65536 00:09:31.565 }, 00:09:31.565 { 00:09:31.565 "name": "BaseBdev2", 00:09:31.565 "uuid": "bf14624e-aaa2-4b59-af16-c8e545414900", 00:09:31.565 "is_configured": true, 00:09:31.565 "data_offset": 0, 00:09:31.565 "data_size": 65536 00:09:31.565 }, 00:09:31.565 { 00:09:31.565 "name": "BaseBdev3", 00:09:31.565 "uuid": "0c3cd95f-08d6-46c5-af39-b24d31c3b430", 00:09:31.565 "is_configured": true, 00:09:31.565 "data_offset": 0, 00:09:31.565 "data_size": 65536 00:09:31.565 } 00:09:31.565 ] 00:09:31.565 }' 00:09:31.565 03:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.565 03:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- 
# jq '.[]' 00:09:31.824 [2024-11-21 03:18:19.287226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.824 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.824 "name": "Existed_Raid", 00:09:31.824 "aliases": [ 00:09:31.824 "0da6b9b0-311c-4dc3-91a7-e6b7eb490aef" 00:09:31.824 ], 00:09:31.824 "product_name": "Raid Volume", 00:09:31.824 "block_size": 512, 00:09:31.824 "num_blocks": 196608, 00:09:31.824 "uuid": "0da6b9b0-311c-4dc3-91a7-e6b7eb490aef", 00:09:31.824 "assigned_rate_limits": { 00:09:31.824 "rw_ios_per_sec": 0, 00:09:31.824 "rw_mbytes_per_sec": 0, 00:09:31.824 "r_mbytes_per_sec": 0, 00:09:31.824 "w_mbytes_per_sec": 0 00:09:31.824 }, 00:09:31.824 "claimed": false, 00:09:31.824 "zoned": false, 00:09:31.824 "supported_io_types": { 00:09:31.824 "read": true, 00:09:31.824 "write": true, 00:09:31.824 "unmap": true, 00:09:31.824 "flush": true, 00:09:31.824 "reset": true, 00:09:31.824 "nvme_admin": false, 00:09:31.824 "nvme_io": false, 00:09:31.824 "nvme_io_md": false, 00:09:31.824 "write_zeroes": true, 00:09:31.824 "zcopy": false, 00:09:31.825 "get_zone_info": false, 00:09:31.825 "zone_management": false, 00:09:31.825 "zone_append": false, 00:09:31.825 "compare": false, 00:09:31.825 "compare_and_write": false, 00:09:31.825 "abort": false, 00:09:31.825 "seek_hole": false, 00:09:31.825 "seek_data": false, 00:09:31.825 "copy": false, 00:09:31.825 "nvme_iov_md": false 00:09:31.825 }, 00:09:31.825 "memory_domains": [ 00:09:31.825 { 00:09:31.825 "dma_device_id": "system", 00:09:31.825 "dma_device_type": 1 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.825 "dma_device_type": 2 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 "dma_device_id": "system", 00:09:31.825 "dma_device_type": 1 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.825 "dma_device_type": 2 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 "dma_device_id": "system", 00:09:31.825 "dma_device_type": 1 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.825 "dma_device_type": 2 00:09:31.825 } 00:09:31.825 ], 00:09:31.825 "driver_specific": { 00:09:31.825 "raid": { 00:09:31.825 "uuid": "0da6b9b0-311c-4dc3-91a7-e6b7eb490aef", 00:09:31.825 "strip_size_kb": 64, 00:09:31.825 "state": "online", 00:09:31.825 "raid_level": "concat", 00:09:31.825 "superblock": false, 00:09:31.825 "num_base_bdevs": 3, 00:09:31.825 "num_base_bdevs_discovered": 3, 00:09:31.825 "num_base_bdevs_operational": 3, 00:09:31.825 "base_bdevs_list": [ 00:09:31.825 { 00:09:31.825 "name": "BaseBdev1", 00:09:31.825 "uuid": "479a5456-886f-4590-892a-2b506477c02d", 00:09:31.825 "is_configured": true, 00:09:31.825 "data_offset": 0, 00:09:31.825 "data_size": 65536 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 "name": "BaseBdev2", 00:09:31.825 "uuid": "bf14624e-aaa2-4b59-af16-c8e545414900", 00:09:31.825 "is_configured": true, 00:09:31.825 "data_offset": 0, 00:09:31.825 "data_size": 65536 00:09:31.825 }, 00:09:31.825 { 00:09:31.825 "name": "BaseBdev3", 00:09:31.825 "uuid": "0c3cd95f-08d6-46c5-af39-b24d31c3b430", 00:09:31.825 "is_configured": true, 00:09:31.825 "data_offset": 0, 00:09:31.825 "data_size": 65536 00:09:31.825 } 00:09:31.825 ] 00:09:31.825 } 00:09:31.825 } 00:09:31.825 }' 00:09:31.825 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.825 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.825 BaseBdev2 00:09:31.825 BaseBdev3' 00:09:31.825 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.083 03:18:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.083 03:18:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.083 [2024-11-21 03:18:19.523023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.083 [2024-11-21 03:18:19.523080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.083 [2024-11-21 03:18:19.523150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.083 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.084 03:18:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.084 "name": "Existed_Raid", 00:09:32.084 "uuid": "0da6b9b0-311c-4dc3-91a7-e6b7eb490aef", 00:09:32.084 "strip_size_kb": 64, 00:09:32.084 "state": "offline", 00:09:32.084 "raid_level": "concat", 00:09:32.084 "superblock": false, 00:09:32.084 "num_base_bdevs": 3, 00:09:32.084 "num_base_bdevs_discovered": 2, 00:09:32.084 "num_base_bdevs_operational": 2, 00:09:32.084 "base_bdevs_list": [ 00:09:32.084 { 00:09:32.084 "name": null, 00:09:32.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.084 "is_configured": false, 00:09:32.084 "data_offset": 0, 00:09:32.084 "data_size": 65536 00:09:32.084 }, 00:09:32.084 { 00:09:32.084 "name": "BaseBdev2", 00:09:32.084 "uuid": "bf14624e-aaa2-4b59-af16-c8e545414900", 00:09:32.084 "is_configured": true, 00:09:32.084 "data_offset": 0, 00:09:32.084 "data_size": 65536 00:09:32.084 }, 00:09:32.084 { 00:09:32.084 "name": "BaseBdev3", 00:09:32.084 "uuid": "0c3cd95f-08d6-46c5-af39-b24d31c3b430", 00:09:32.084 "is_configured": true, 00:09:32.084 "data_offset": 0, 00:09:32.084 "data_size": 65536 00:09:32.084 } 00:09:32.084 ] 00:09:32.084 }' 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.084 03:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.668 [2024-11-21 03:18:20.063170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.668 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.669 [2024-11-21 03:18:20.130967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.669 [2024-11-21 03:18:20.131050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.669 BaseBdev2 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.669 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.927 [ 00:09:32.928 { 00:09:32.928 "name": "BaseBdev2", 00:09:32.928 "aliases": [ 00:09:32.928 
"14ed0768-cf31-4e13-b7a4-937be6fd11f6" 00:09:32.928 ], 00:09:32.928 "product_name": "Malloc disk", 00:09:32.928 "block_size": 512, 00:09:32.928 "num_blocks": 65536, 00:09:32.928 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:32.928 "assigned_rate_limits": { 00:09:32.928 "rw_ios_per_sec": 0, 00:09:32.928 "rw_mbytes_per_sec": 0, 00:09:32.928 "r_mbytes_per_sec": 0, 00:09:32.928 "w_mbytes_per_sec": 0 00:09:32.928 }, 00:09:32.928 "claimed": false, 00:09:32.928 "zoned": false, 00:09:32.928 "supported_io_types": { 00:09:32.928 "read": true, 00:09:32.928 "write": true, 00:09:32.928 "unmap": true, 00:09:32.928 "flush": true, 00:09:32.928 "reset": true, 00:09:32.928 "nvme_admin": false, 00:09:32.928 "nvme_io": false, 00:09:32.928 "nvme_io_md": false, 00:09:32.928 "write_zeroes": true, 00:09:32.928 "zcopy": true, 00:09:32.928 "get_zone_info": false, 00:09:32.928 "zone_management": false, 00:09:32.928 "zone_append": false, 00:09:32.928 "compare": false, 00:09:32.928 "compare_and_write": false, 00:09:32.928 "abort": true, 00:09:32.928 "seek_hole": false, 00:09:32.928 "seek_data": false, 00:09:32.928 "copy": true, 00:09:32.928 "nvme_iov_md": false 00:09:32.928 }, 00:09:32.928 "memory_domains": [ 00:09:32.928 { 00:09:32.928 "dma_device_id": "system", 00:09:32.928 "dma_device_type": 1 00:09:32.928 }, 00:09:32.928 { 00:09:32.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.928 "dma_device_type": 2 00:09:32.928 } 00:09:32.928 ], 00:09:32.928 "driver_specific": {} 00:09:32.928 } 00:09:32.928 ] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.928 BaseBdev3 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.928 [ 00:09:32.928 { 00:09:32.928 "name": "BaseBdev3", 00:09:32.928 "aliases": [ 00:09:32.928 
"ea5ffef8-75fc-491c-a0ce-5680dc22bba0" 00:09:32.928 ], 00:09:32.928 "product_name": "Malloc disk", 00:09:32.928 "block_size": 512, 00:09:32.928 "num_blocks": 65536, 00:09:32.928 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:32.928 "assigned_rate_limits": { 00:09:32.928 "rw_ios_per_sec": 0, 00:09:32.928 "rw_mbytes_per_sec": 0, 00:09:32.928 "r_mbytes_per_sec": 0, 00:09:32.928 "w_mbytes_per_sec": 0 00:09:32.928 }, 00:09:32.928 "claimed": false, 00:09:32.928 "zoned": false, 00:09:32.928 "supported_io_types": { 00:09:32.928 "read": true, 00:09:32.928 "write": true, 00:09:32.928 "unmap": true, 00:09:32.928 "flush": true, 00:09:32.928 "reset": true, 00:09:32.928 "nvme_admin": false, 00:09:32.928 "nvme_io": false, 00:09:32.928 "nvme_io_md": false, 00:09:32.928 "write_zeroes": true, 00:09:32.928 "zcopy": true, 00:09:32.928 "get_zone_info": false, 00:09:32.928 "zone_management": false, 00:09:32.928 "zone_append": false, 00:09:32.928 "compare": false, 00:09:32.928 "compare_and_write": false, 00:09:32.928 "abort": true, 00:09:32.928 "seek_hole": false, 00:09:32.928 "seek_data": false, 00:09:32.928 "copy": true, 00:09:32.928 "nvme_iov_md": false 00:09:32.928 }, 00:09:32.928 "memory_domains": [ 00:09:32.928 { 00:09:32.928 "dma_device_id": "system", 00:09:32.928 "dma_device_type": 1 00:09:32.928 }, 00:09:32.928 { 00:09:32.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.928 "dma_device_type": 2 00:09:32.928 } 00:09:32.928 ], 00:09:32.928 "driver_specific": {} 00:09:32.928 } 00:09:32.928 ] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.928 [2024-11-21 03:18:20.294214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.928 [2024-11-21 03:18:20.294653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.928 [2024-11-21 03:18:20.294698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.928 [2024-11-21 03:18:20.296974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.928 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.929 "name": "Existed_Raid", 00:09:32.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.929 "strip_size_kb": 64, 00:09:32.929 "state": "configuring", 00:09:32.929 "raid_level": "concat", 00:09:32.929 "superblock": false, 00:09:32.929 "num_base_bdevs": 3, 00:09:32.929 "num_base_bdevs_discovered": 2, 00:09:32.929 "num_base_bdevs_operational": 3, 00:09:32.929 "base_bdevs_list": [ 00:09:32.929 { 00:09:32.929 "name": "BaseBdev1", 00:09:32.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.929 "is_configured": false, 00:09:32.929 "data_offset": 0, 00:09:32.929 "data_size": 0 00:09:32.929 }, 00:09:32.929 { 00:09:32.929 "name": "BaseBdev2", 00:09:32.929 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:32.929 "is_configured": true, 00:09:32.929 "data_offset": 0, 00:09:32.929 "data_size": 65536 00:09:32.929 }, 00:09:32.929 { 00:09:32.929 "name": "BaseBdev3", 00:09:32.929 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:32.929 "is_configured": true, 00:09:32.929 "data_offset": 0, 00:09:32.929 "data_size": 65536 00:09:32.929 } 00:09:32.929 ] 00:09:32.929 }' 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:32.929 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.188 [2024-11-21 03:18:20.714330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.188 03:18:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.188 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.447 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.447 "name": "Existed_Raid", 00:09:33.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.447 "strip_size_kb": 64, 00:09:33.447 "state": "configuring", 00:09:33.447 "raid_level": "concat", 00:09:33.447 "superblock": false, 00:09:33.447 "num_base_bdevs": 3, 00:09:33.447 "num_base_bdevs_discovered": 1, 00:09:33.447 "num_base_bdevs_operational": 3, 00:09:33.447 "base_bdevs_list": [ 00:09:33.447 { 00:09:33.447 "name": "BaseBdev1", 00:09:33.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.447 "is_configured": false, 00:09:33.448 "data_offset": 0, 00:09:33.448 "data_size": 0 00:09:33.448 }, 00:09:33.448 { 00:09:33.448 "name": null, 00:09:33.448 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:33.448 "is_configured": false, 00:09:33.448 "data_offset": 0, 00:09:33.448 "data_size": 65536 00:09:33.448 }, 00:09:33.448 { 00:09:33.448 "name": "BaseBdev3", 00:09:33.448 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:33.448 "is_configured": true, 00:09:33.448 "data_offset": 0, 00:09:33.448 "data_size": 65536 00:09:33.448 } 00:09:33.448 ] 00:09:33.448 }' 00:09:33.448 03:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.448 03:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.706 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.706 03:18:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.706 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.706 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.706 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.707 [2024-11-21 03:18:21.241812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.707 BaseBdev1 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.707 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.707 [ 00:09:33.707 { 00:09:33.707 "name": "BaseBdev1", 00:09:33.707 "aliases": [ 00:09:33.707 "02bc811d-35b8-4335-ba99-78535cb9f776" 00:09:33.707 ], 00:09:33.707 "product_name": "Malloc disk", 00:09:33.707 "block_size": 512, 00:09:33.707 "num_blocks": 65536, 00:09:33.707 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:33.707 "assigned_rate_limits": { 00:09:33.707 "rw_ios_per_sec": 0, 00:09:33.707 "rw_mbytes_per_sec": 0, 00:09:33.707 "r_mbytes_per_sec": 0, 00:09:33.707 "w_mbytes_per_sec": 0 00:09:33.707 }, 00:09:33.707 "claimed": true, 00:09:33.707 "claim_type": "exclusive_write", 00:09:33.966 "zoned": false, 00:09:33.966 "supported_io_types": { 00:09:33.966 "read": true, 00:09:33.966 "write": true, 00:09:33.966 "unmap": true, 00:09:33.966 "flush": true, 00:09:33.966 "reset": true, 00:09:33.966 "nvme_admin": false, 00:09:33.966 "nvme_io": false, 00:09:33.966 "nvme_io_md": false, 00:09:33.966 "write_zeroes": true, 00:09:33.966 "zcopy": true, 00:09:33.966 "get_zone_info": false, 00:09:33.966 "zone_management": false, 00:09:33.966 "zone_append": false, 00:09:33.966 "compare": false, 00:09:33.966 "compare_and_write": false, 00:09:33.966 "abort": true, 00:09:33.966 "seek_hole": false, 00:09:33.966 "seek_data": false, 00:09:33.966 "copy": true, 00:09:33.966 "nvme_iov_md": false 00:09:33.966 }, 00:09:33.966 "memory_domains": [ 00:09:33.966 { 00:09:33.966 
"dma_device_id": "system", 00:09:33.966 "dma_device_type": 1 00:09:33.966 }, 00:09:33.966 { 00:09:33.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.966 "dma_device_type": 2 00:09:33.966 } 00:09:33.966 ], 00:09:33.966 "driver_specific": {} 00:09:33.966 } 00:09:33.966 ] 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.966 
03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.966 "name": "Existed_Raid", 00:09:33.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.966 "strip_size_kb": 64, 00:09:33.966 "state": "configuring", 00:09:33.966 "raid_level": "concat", 00:09:33.966 "superblock": false, 00:09:33.966 "num_base_bdevs": 3, 00:09:33.966 "num_base_bdevs_discovered": 2, 00:09:33.966 "num_base_bdevs_operational": 3, 00:09:33.966 "base_bdevs_list": [ 00:09:33.966 { 00:09:33.966 "name": "BaseBdev1", 00:09:33.966 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:33.966 "is_configured": true, 00:09:33.966 "data_offset": 0, 00:09:33.966 "data_size": 65536 00:09:33.966 }, 00:09:33.966 { 00:09:33.966 "name": null, 00:09:33.966 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:33.966 "is_configured": false, 00:09:33.966 "data_offset": 0, 00:09:33.966 "data_size": 65536 00:09:33.966 }, 00:09:33.966 { 00:09:33.966 "name": "BaseBdev3", 00:09:33.966 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:33.966 "is_configured": true, 00:09:33.966 "data_offset": 0, 00:09:33.966 "data_size": 65536 00:09:33.966 } 00:09:33.966 ] 00:09:33.966 }' 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.966 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.226 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.226 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.226 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 03:18:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 [2024-11-21 03:18:21.770105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.227 03:18:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.486 03:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.486 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.486 "name": "Existed_Raid", 00:09:34.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.486 "strip_size_kb": 64, 00:09:34.486 "state": "configuring", 00:09:34.486 "raid_level": "concat", 00:09:34.486 "superblock": false, 00:09:34.486 "num_base_bdevs": 3, 00:09:34.486 "num_base_bdevs_discovered": 1, 00:09:34.486 "num_base_bdevs_operational": 3, 00:09:34.486 "base_bdevs_list": [ 00:09:34.486 { 00:09:34.486 "name": "BaseBdev1", 00:09:34.486 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:34.486 "is_configured": true, 00:09:34.486 "data_offset": 0, 00:09:34.486 "data_size": 65536 00:09:34.486 }, 00:09:34.486 { 00:09:34.486 "name": null, 00:09:34.486 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:34.486 "is_configured": false, 00:09:34.486 "data_offset": 0, 00:09:34.486 "data_size": 65536 00:09:34.486 }, 00:09:34.486 { 00:09:34.486 "name": null, 00:09:34.486 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:34.486 "is_configured": false, 00:09:34.486 "data_offset": 0, 00:09:34.486 "data_size": 65536 00:09:34.486 } 00:09:34.486 ] 00:09:34.486 }' 00:09:34.486 03:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.486 03:18:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.746 [2024-11-21 03:18:22.286313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.746 03:18:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.746 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.006 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.006 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.006 "name": "Existed_Raid", 00:09:35.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.006 "strip_size_kb": 64, 00:09:35.006 "state": "configuring", 00:09:35.006 "raid_level": "concat", 00:09:35.006 "superblock": false, 00:09:35.006 "num_base_bdevs": 3, 00:09:35.006 "num_base_bdevs_discovered": 2, 00:09:35.006 "num_base_bdevs_operational": 3, 00:09:35.006 "base_bdevs_list": [ 00:09:35.006 { 00:09:35.006 "name": "BaseBdev1", 00:09:35.006 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:35.006 "is_configured": true, 00:09:35.006 "data_offset": 0, 00:09:35.006 "data_size": 65536 00:09:35.006 }, 00:09:35.006 { 00:09:35.006 "name": null, 00:09:35.006 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:35.006 "is_configured": false, 00:09:35.006 "data_offset": 
0, 00:09:35.006 "data_size": 65536 00:09:35.006 }, 00:09:35.006 { 00:09:35.006 "name": "BaseBdev3", 00:09:35.006 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:35.006 "is_configured": true, 00:09:35.006 "data_offset": 0, 00:09:35.006 "data_size": 65536 00:09:35.006 } 00:09:35.006 ] 00:09:35.006 }' 00:09:35.006 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.006 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.267 [2024-11-21 03:18:22.762421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.267 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.268 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.268 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.268 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.268 "name": "Existed_Raid", 00:09:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.268 "strip_size_kb": 64, 00:09:35.268 "state": "configuring", 00:09:35.268 "raid_level": "concat", 00:09:35.268 "superblock": false, 00:09:35.268 "num_base_bdevs": 3, 00:09:35.268 "num_base_bdevs_discovered": 1, 00:09:35.268 "num_base_bdevs_operational": 3, 00:09:35.268 "base_bdevs_list": [ 
00:09:35.268 { 00:09:35.268 "name": null, 00:09:35.268 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:35.268 "is_configured": false, 00:09:35.268 "data_offset": 0, 00:09:35.268 "data_size": 65536 00:09:35.268 }, 00:09:35.268 { 00:09:35.268 "name": null, 00:09:35.268 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:35.268 "is_configured": false, 00:09:35.268 "data_offset": 0, 00:09:35.268 "data_size": 65536 00:09:35.268 }, 00:09:35.268 { 00:09:35.268 "name": "BaseBdev3", 00:09:35.268 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:35.268 "is_configured": true, 00:09:35.268 "data_offset": 0, 00:09:35.268 "data_size": 65536 00:09:35.268 } 00:09:35.268 ] 00:09:35.268 }' 00:09:35.268 03:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.268 03:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.838 [2024-11-21 03:18:23.281411] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.838 "name": "Existed_Raid", 00:09:35.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.838 "strip_size_kb": 64, 00:09:35.838 "state": "configuring", 00:09:35.838 "raid_level": "concat", 00:09:35.838 "superblock": false, 00:09:35.838 "num_base_bdevs": 3, 00:09:35.838 "num_base_bdevs_discovered": 2, 00:09:35.838 "num_base_bdevs_operational": 3, 00:09:35.838 "base_bdevs_list": [ 00:09:35.838 { 00:09:35.838 "name": null, 00:09:35.838 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:35.838 "is_configured": false, 00:09:35.838 "data_offset": 0, 00:09:35.838 "data_size": 65536 00:09:35.838 }, 00:09:35.838 { 00:09:35.838 "name": "BaseBdev2", 00:09:35.838 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:35.838 "is_configured": true, 00:09:35.838 "data_offset": 0, 00:09:35.838 "data_size": 65536 00:09:35.838 }, 00:09:35.838 { 00:09:35.838 "name": "BaseBdev3", 00:09:35.838 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:35.838 "is_configured": true, 00:09:35.838 "data_offset": 0, 00:09:35.838 "data_size": 65536 00:09:35.838 } 00:09:35.838 ] 00:09:35.838 }' 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.838 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- 
# [[ true == \t\r\u\e ]] 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.406 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 02bc811d-35b8-4335-ba99-78535cb9f776 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.407 [2024-11-21 03:18:23.856946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.407 [2024-11-21 03:18:23.857006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:36.407 [2024-11-21 03:18:23.857030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:36.407 [2024-11-21 03:18:23.857303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:36.407 [2024-11-21 03:18:23.857437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:36.407 [2024-11-21 03:18:23.857463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:36.407 [2024-11-21 03:18:23.857668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.407 NewBaseBdev 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.407 [ 00:09:36.407 { 00:09:36.407 "name": "NewBaseBdev", 00:09:36.407 "aliases": [ 00:09:36.407 "02bc811d-35b8-4335-ba99-78535cb9f776" 00:09:36.407 ], 00:09:36.407 "product_name": "Malloc disk", 00:09:36.407 "block_size": 512, 00:09:36.407 "num_blocks": 65536, 00:09:36.407 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:36.407 "assigned_rate_limits": { 00:09:36.407 "rw_ios_per_sec": 0, 00:09:36.407 "rw_mbytes_per_sec": 0, 00:09:36.407 "r_mbytes_per_sec": 0, 00:09:36.407 "w_mbytes_per_sec": 0 
00:09:36.407 }, 00:09:36.407 "claimed": true, 00:09:36.407 "claim_type": "exclusive_write", 00:09:36.407 "zoned": false, 00:09:36.407 "supported_io_types": { 00:09:36.407 "read": true, 00:09:36.407 "write": true, 00:09:36.407 "unmap": true, 00:09:36.407 "flush": true, 00:09:36.407 "reset": true, 00:09:36.407 "nvme_admin": false, 00:09:36.407 "nvme_io": false, 00:09:36.407 "nvme_io_md": false, 00:09:36.407 "write_zeroes": true, 00:09:36.407 "zcopy": true, 00:09:36.407 "get_zone_info": false, 00:09:36.407 "zone_management": false, 00:09:36.407 "zone_append": false, 00:09:36.407 "compare": false, 00:09:36.407 "compare_and_write": false, 00:09:36.407 "abort": true, 00:09:36.407 "seek_hole": false, 00:09:36.407 "seek_data": false, 00:09:36.407 "copy": true, 00:09:36.407 "nvme_iov_md": false 00:09:36.407 }, 00:09:36.407 "memory_domains": [ 00:09:36.407 { 00:09:36.407 "dma_device_id": "system", 00:09:36.407 "dma_device_type": 1 00:09:36.407 }, 00:09:36.407 { 00:09:36.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.407 "dma_device_type": 2 00:09:36.407 } 00:09:36.407 ], 00:09:36.407 "driver_specific": {} 00:09:36.407 } 00:09:36.407 ] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.407 03:18:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.407 "name": "Existed_Raid", 00:09:36.407 "uuid": "e9199bd5-04f9-4be3-a0e4-4585b9954ffa", 00:09:36.407 "strip_size_kb": 64, 00:09:36.407 "state": "online", 00:09:36.407 "raid_level": "concat", 00:09:36.407 "superblock": false, 00:09:36.407 "num_base_bdevs": 3, 00:09:36.407 "num_base_bdevs_discovered": 3, 00:09:36.407 "num_base_bdevs_operational": 3, 00:09:36.407 "base_bdevs_list": [ 00:09:36.407 { 00:09:36.407 "name": "NewBaseBdev", 00:09:36.407 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:36.407 "is_configured": true, 00:09:36.407 "data_offset": 0, 00:09:36.407 "data_size": 65536 00:09:36.407 }, 00:09:36.407 { 00:09:36.407 "name": "BaseBdev2", 00:09:36.407 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:36.407 "is_configured": true, 00:09:36.407 
"data_offset": 0, 00:09:36.407 "data_size": 65536 00:09:36.407 }, 00:09:36.407 { 00:09:36.407 "name": "BaseBdev3", 00:09:36.407 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:36.407 "is_configured": true, 00:09:36.407 "data_offset": 0, 00:09:36.407 "data_size": 65536 00:09:36.407 } 00:09:36.407 ] 00:09:36.407 }' 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.407 03:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.977 [2024-11-21 03:18:24.409606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.977 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.977 "name": 
"Existed_Raid", 00:09:36.977 "aliases": [ 00:09:36.977 "e9199bd5-04f9-4be3-a0e4-4585b9954ffa" 00:09:36.977 ], 00:09:36.977 "product_name": "Raid Volume", 00:09:36.977 "block_size": 512, 00:09:36.977 "num_blocks": 196608, 00:09:36.977 "uuid": "e9199bd5-04f9-4be3-a0e4-4585b9954ffa", 00:09:36.977 "assigned_rate_limits": { 00:09:36.977 "rw_ios_per_sec": 0, 00:09:36.977 "rw_mbytes_per_sec": 0, 00:09:36.977 "r_mbytes_per_sec": 0, 00:09:36.977 "w_mbytes_per_sec": 0 00:09:36.977 }, 00:09:36.977 "claimed": false, 00:09:36.977 "zoned": false, 00:09:36.977 "supported_io_types": { 00:09:36.977 "read": true, 00:09:36.977 "write": true, 00:09:36.977 "unmap": true, 00:09:36.977 "flush": true, 00:09:36.977 "reset": true, 00:09:36.977 "nvme_admin": false, 00:09:36.977 "nvme_io": false, 00:09:36.977 "nvme_io_md": false, 00:09:36.977 "write_zeroes": true, 00:09:36.977 "zcopy": false, 00:09:36.977 "get_zone_info": false, 00:09:36.977 "zone_management": false, 00:09:36.977 "zone_append": false, 00:09:36.977 "compare": false, 00:09:36.977 "compare_and_write": false, 00:09:36.977 "abort": false, 00:09:36.977 "seek_hole": false, 00:09:36.977 "seek_data": false, 00:09:36.977 "copy": false, 00:09:36.977 "nvme_iov_md": false 00:09:36.977 }, 00:09:36.977 "memory_domains": [ 00:09:36.977 { 00:09:36.977 "dma_device_id": "system", 00:09:36.977 "dma_device_type": 1 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.977 "dma_device_type": 2 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "dma_device_id": "system", 00:09:36.977 "dma_device_type": 1 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.977 "dma_device_type": 2 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "dma_device_id": "system", 00:09:36.977 "dma_device_type": 1 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.977 "dma_device_type": 2 00:09:36.977 } 00:09:36.977 ], 00:09:36.977 "driver_specific": { 
00:09:36.977 "raid": { 00:09:36.977 "uuid": "e9199bd5-04f9-4be3-a0e4-4585b9954ffa", 00:09:36.977 "strip_size_kb": 64, 00:09:36.977 "state": "online", 00:09:36.977 "raid_level": "concat", 00:09:36.977 "superblock": false, 00:09:36.977 "num_base_bdevs": 3, 00:09:36.977 "num_base_bdevs_discovered": 3, 00:09:36.977 "num_base_bdevs_operational": 3, 00:09:36.977 "base_bdevs_list": [ 00:09:36.977 { 00:09:36.977 "name": "NewBaseBdev", 00:09:36.977 "uuid": "02bc811d-35b8-4335-ba99-78535cb9f776", 00:09:36.977 "is_configured": true, 00:09:36.977 "data_offset": 0, 00:09:36.977 "data_size": 65536 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "name": "BaseBdev2", 00:09:36.977 "uuid": "14ed0768-cf31-4e13-b7a4-937be6fd11f6", 00:09:36.977 "is_configured": true, 00:09:36.977 "data_offset": 0, 00:09:36.977 "data_size": 65536 00:09:36.977 }, 00:09:36.977 { 00:09:36.977 "name": "BaseBdev3", 00:09:36.977 "uuid": "ea5ffef8-75fc-491c-a0ce-5680dc22bba0", 00:09:36.978 "is_configured": true, 00:09:36.978 "data_offset": 0, 00:09:36.978 "data_size": 65536 00:09:36.978 } 00:09:36.978 ] 00:09:36.978 } 00:09:36.978 } 00:09:36.978 }' 00:09:36.978 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.978 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:36.978 BaseBdev2 00:09:36.978 BaseBdev3' 00:09:36.978 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.237 03:18:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.237 [2024-11-21 03:18:24.709339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.237 [2024-11-21 03:18:24.709387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.237 [2024-11-21 03:18:24.709485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.237 [2024-11-21 03:18:24.709556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.237 [2024-11-21 03:18:24.709577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78761 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78761 ']' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 78761 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78761 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.237 killing process with pid 78761 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78761' 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78761 00:09:37.237 [2024-11-21 03:18:24.760995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.237 03:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78761 00:09:37.237 [2024-11-21 03:18:24.794426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.497 03:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.497 00:09:37.497 real 0m9.291s 00:09:37.497 user 0m15.783s 00:09:37.497 sys 0m2.010s 00:09:37.497 03:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.497 03:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.497 ************************************ 00:09:37.497 END TEST raid_state_function_test 00:09:37.497 ************************************ 00:09:37.756 03:18:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:37.756 03:18:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:37.756 03:18:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.756 03:18:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.756 ************************************ 00:09:37.756 START TEST raid_state_function_test_sb 00:09:37.756 ************************************ 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.756 03:18:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79371 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79371' 00:09:37.756 Process raid pid: 79371 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79371 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 79371 ']' 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.756 03:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.756 [2024-11-21 03:18:25.206344] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:37.756 [2024-11-21 03:18:25.206497] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.016 [2024-11-21 03:18:25.354995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:38.016 [2024-11-21 03:18:25.376761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.016 [2024-11-21 03:18:25.420600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.016 [2024-11-21 03:18:25.498617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.016 [2024-11-21 03:18:25.498661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.586 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.586 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:38.586 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.586 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.587 [2024-11-21 03:18:26.112886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.587 [2024-11-21 03:18:26.112956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.587 [2024-11-21 03:18:26.112970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.587 [2024-11-21 03:18:26.112978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.587 [2024-11-21 03:18:26.112994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.587 [2024-11-21 03:18:26.113001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.587 03:18:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.587 "name": "Existed_Raid", 00:09:38.587 "uuid": "45541440-bad8-4eca-8644-ec25f2adaffc", 00:09:38.587 "strip_size_kb": 64, 
00:09:38.587 "state": "configuring", 00:09:38.587 "raid_level": "concat", 00:09:38.587 "superblock": true, 00:09:38.587 "num_base_bdevs": 3, 00:09:38.587 "num_base_bdevs_discovered": 0, 00:09:38.587 "num_base_bdevs_operational": 3, 00:09:38.587 "base_bdevs_list": [ 00:09:38.587 { 00:09:38.587 "name": "BaseBdev1", 00:09:38.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.587 "is_configured": false, 00:09:38.587 "data_offset": 0, 00:09:38.587 "data_size": 0 00:09:38.587 }, 00:09:38.587 { 00:09:38.587 "name": "BaseBdev2", 00:09:38.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.587 "is_configured": false, 00:09:38.587 "data_offset": 0, 00:09:38.587 "data_size": 0 00:09:38.587 }, 00:09:38.587 { 00:09:38.587 "name": "BaseBdev3", 00:09:38.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.587 "is_configured": false, 00:09:38.587 "data_offset": 0, 00:09:38.587 "data_size": 0 00:09:38.587 } 00:09:38.587 ] 00:09:38.587 }' 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.587 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 [2024-11-21 03:18:26.516890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.171 [2024-11-21 03:18:26.516937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 [2024-11-21 03:18:26.528916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.171 [2024-11-21 03:18:26.528959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.171 [2024-11-21 03:18:26.528971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.171 [2024-11-21 03:18:26.528995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.171 [2024-11-21 03:18:26.529005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.171 [2024-11-21 03:18:26.529015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 [2024-11-21 03:18:26.556172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.171 BaseBdev1 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 [ 00:09:39.171 { 00:09:39.171 "name": "BaseBdev1", 00:09:39.171 "aliases": [ 00:09:39.171 "ef6e07f3-31d9-463d-98b1-528429d63c66" 00:09:39.171 ], 00:09:39.171 "product_name": "Malloc disk", 00:09:39.171 "block_size": 512, 00:09:39.171 "num_blocks": 65536, 00:09:39.171 "uuid": "ef6e07f3-31d9-463d-98b1-528429d63c66", 00:09:39.171 "assigned_rate_limits": { 00:09:39.171 "rw_ios_per_sec": 0, 00:09:39.171 "rw_mbytes_per_sec": 0, 00:09:39.171 "r_mbytes_per_sec": 0, 00:09:39.171 "w_mbytes_per_sec": 0 00:09:39.171 }, 00:09:39.171 "claimed": true, 00:09:39.171 "claim_type": "exclusive_write", 00:09:39.171 "zoned": false, 00:09:39.171 "supported_io_types": { 
00:09:39.171 "read": true, 00:09:39.171 "write": true, 00:09:39.171 "unmap": true, 00:09:39.171 "flush": true, 00:09:39.171 "reset": true, 00:09:39.171 "nvme_admin": false, 00:09:39.171 "nvme_io": false, 00:09:39.171 "nvme_io_md": false, 00:09:39.171 "write_zeroes": true, 00:09:39.171 "zcopy": true, 00:09:39.171 "get_zone_info": false, 00:09:39.171 "zone_management": false, 00:09:39.171 "zone_append": false, 00:09:39.171 "compare": false, 00:09:39.171 "compare_and_write": false, 00:09:39.171 "abort": true, 00:09:39.171 "seek_hole": false, 00:09:39.171 "seek_data": false, 00:09:39.171 "copy": true, 00:09:39.171 "nvme_iov_md": false 00:09:39.171 }, 00:09:39.171 "memory_domains": [ 00:09:39.171 { 00:09:39.171 "dma_device_id": "system", 00:09:39.171 "dma_device_type": 1 00:09:39.171 }, 00:09:39.171 { 00:09:39.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.171 "dma_device_type": 2 00:09:39.171 } 00:09:39.171 ], 00:09:39.171 "driver_specific": {} 00:09:39.171 } 00:09:39.171 ] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.171 03:18:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.171 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.171 "name": "Existed_Raid", 00:09:39.171 "uuid": "35f65e61-34cd-40dd-9e78-bd8db765c618", 00:09:39.171 "strip_size_kb": 64, 00:09:39.171 "state": "configuring", 00:09:39.171 "raid_level": "concat", 00:09:39.171 "superblock": true, 00:09:39.171 "num_base_bdevs": 3, 00:09:39.171 "num_base_bdevs_discovered": 1, 00:09:39.171 "num_base_bdevs_operational": 3, 00:09:39.171 "base_bdevs_list": [ 00:09:39.171 { 00:09:39.171 "name": "BaseBdev1", 00:09:39.171 "uuid": "ef6e07f3-31d9-463d-98b1-528429d63c66", 00:09:39.172 "is_configured": true, 00:09:39.172 "data_offset": 2048, 00:09:39.172 "data_size": 63488 00:09:39.172 }, 00:09:39.172 { 00:09:39.172 "name": "BaseBdev2", 00:09:39.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.172 "is_configured": false, 00:09:39.172 "data_offset": 0, 00:09:39.172 "data_size": 0 00:09:39.172 }, 00:09:39.172 { 00:09:39.172 "name": 
"BaseBdev3", 00:09:39.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.172 "is_configured": false, 00:09:39.172 "data_offset": 0, 00:09:39.172 "data_size": 0 00:09:39.172 } 00:09:39.172 ] 00:09:39.172 }' 00:09:39.172 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.172 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.431 [2024-11-21 03:18:26.960331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.431 [2024-11-21 03:18:26.960398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.431 [2024-11-21 03:18:26.972379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.431 [2024-11-21 03:18:26.974722] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.431 [2024-11-21 03:18:26.974763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.431 [2024-11-21 03:18:26.974777] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.431 [2024-11-21 03:18:26.974785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.431 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.432 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.432 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.432 03:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:39.432 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.432 03:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.691 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.691 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.691 "name": "Existed_Raid", 00:09:39.691 "uuid": "6c4785de-fe6c-4c2b-95e4-63bb79185495", 00:09:39.691 "strip_size_kb": 64, 00:09:39.691 "state": "configuring", 00:09:39.691 "raid_level": "concat", 00:09:39.691 "superblock": true, 00:09:39.691 "num_base_bdevs": 3, 00:09:39.691 "num_base_bdevs_discovered": 1, 00:09:39.691 "num_base_bdevs_operational": 3, 00:09:39.691 "base_bdevs_list": [ 00:09:39.691 { 00:09:39.691 "name": "BaseBdev1", 00:09:39.691 "uuid": "ef6e07f3-31d9-463d-98b1-528429d63c66", 00:09:39.691 "is_configured": true, 00:09:39.691 "data_offset": 2048, 00:09:39.691 "data_size": 63488 00:09:39.691 }, 00:09:39.691 { 00:09:39.691 "name": "BaseBdev2", 00:09:39.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.691 "is_configured": false, 00:09:39.691 "data_offset": 0, 00:09:39.691 "data_size": 0 00:09:39.691 }, 00:09:39.691 { 00:09:39.691 "name": "BaseBdev3", 00:09:39.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.691 "is_configured": false, 00:09:39.691 "data_offset": 0, 00:09:39.691 "data_size": 0 00:09:39.691 } 00:09:39.691 ] 00:09:39.691 }' 00:09:39.691 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.691 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.950 [2024-11-21 03:18:27.417473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.950 BaseBdev2 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.950 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 [ 00:09:39.951 { 00:09:39.951 "name": "BaseBdev2", 00:09:39.951 "aliases": [ 00:09:39.951 
"887fa363-71ed-43af-be9a-2d112bde5bab" 00:09:39.951 ], 00:09:39.951 "product_name": "Malloc disk", 00:09:39.951 "block_size": 512, 00:09:39.951 "num_blocks": 65536, 00:09:39.951 "uuid": "887fa363-71ed-43af-be9a-2d112bde5bab", 00:09:39.951 "assigned_rate_limits": { 00:09:39.951 "rw_ios_per_sec": 0, 00:09:39.951 "rw_mbytes_per_sec": 0, 00:09:39.951 "r_mbytes_per_sec": 0, 00:09:39.951 "w_mbytes_per_sec": 0 00:09:39.951 }, 00:09:39.951 "claimed": true, 00:09:39.951 "claim_type": "exclusive_write", 00:09:39.951 "zoned": false, 00:09:39.951 "supported_io_types": { 00:09:39.951 "read": true, 00:09:39.951 "write": true, 00:09:39.951 "unmap": true, 00:09:39.951 "flush": true, 00:09:39.951 "reset": true, 00:09:39.951 "nvme_admin": false, 00:09:39.951 "nvme_io": false, 00:09:39.951 "nvme_io_md": false, 00:09:39.951 "write_zeroes": true, 00:09:39.951 "zcopy": true, 00:09:39.951 "get_zone_info": false, 00:09:39.951 "zone_management": false, 00:09:39.951 "zone_append": false, 00:09:39.951 "compare": false, 00:09:39.951 "compare_and_write": false, 00:09:39.951 "abort": true, 00:09:39.951 "seek_hole": false, 00:09:39.951 "seek_data": false, 00:09:39.951 "copy": true, 00:09:39.951 "nvme_iov_md": false 00:09:39.951 }, 00:09:39.951 "memory_domains": [ 00:09:39.951 { 00:09:39.951 "dma_device_id": "system", 00:09:39.951 "dma_device_type": 1 00:09:39.951 }, 00:09:39.951 { 00:09:39.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.951 "dma_device_type": 2 00:09:39.951 } 00:09:39.951 ], 00:09:39.951 "driver_specific": {} 00:09:39.951 } 00:09:39.951 ] 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.951 "name": "Existed_Raid", 00:09:39.951 "uuid": "6c4785de-fe6c-4c2b-95e4-63bb79185495", 00:09:39.951 
"strip_size_kb": 64, 00:09:39.951 "state": "configuring", 00:09:39.951 "raid_level": "concat", 00:09:39.951 "superblock": true, 00:09:39.951 "num_base_bdevs": 3, 00:09:39.951 "num_base_bdevs_discovered": 2, 00:09:39.951 "num_base_bdevs_operational": 3, 00:09:39.951 "base_bdevs_list": [ 00:09:39.951 { 00:09:39.951 "name": "BaseBdev1", 00:09:39.951 "uuid": "ef6e07f3-31d9-463d-98b1-528429d63c66", 00:09:39.951 "is_configured": true, 00:09:39.951 "data_offset": 2048, 00:09:39.951 "data_size": 63488 00:09:39.951 }, 00:09:39.951 { 00:09:39.951 "name": "BaseBdev2", 00:09:39.951 "uuid": "887fa363-71ed-43af-be9a-2d112bde5bab", 00:09:39.951 "is_configured": true, 00:09:39.951 "data_offset": 2048, 00:09:39.951 "data_size": 63488 00:09:39.951 }, 00:09:39.951 { 00:09:39.951 "name": "BaseBdev3", 00:09:39.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.951 "is_configured": false, 00:09:39.951 "data_offset": 0, 00:09:39.951 "data_size": 0 00:09:39.951 } 00:09:39.951 ] 00:09:39.951 }' 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.951 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 [2024-11-21 03:18:27.922898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.520 [2024-11-21 03:18:27.923201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:40.520 [2024-11-21 03:18:27.923254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.520 BaseBdev3 00:09:40.520 [2024-11-21 03:18:27.923715] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.520 [2024-11-21 03:18:27.923905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:40.520 [2024-11-21 03:18:27.923943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:40.520 [2024-11-21 03:18:27.924203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 [ 00:09:40.520 { 00:09:40.520 "name": "BaseBdev3", 00:09:40.520 "aliases": [ 00:09:40.520 "8d0ef320-acfe-4c89-806a-5b8c1615c7c2" 00:09:40.520 ], 00:09:40.520 "product_name": "Malloc disk", 00:09:40.520 "block_size": 512, 00:09:40.520 "num_blocks": 65536, 00:09:40.520 "uuid": "8d0ef320-acfe-4c89-806a-5b8c1615c7c2", 00:09:40.520 "assigned_rate_limits": { 00:09:40.520 "rw_ios_per_sec": 0, 00:09:40.520 "rw_mbytes_per_sec": 0, 00:09:40.520 "r_mbytes_per_sec": 0, 00:09:40.520 "w_mbytes_per_sec": 0 00:09:40.520 }, 00:09:40.520 "claimed": true, 00:09:40.520 "claim_type": "exclusive_write", 00:09:40.520 "zoned": false, 00:09:40.520 "supported_io_types": { 00:09:40.520 "read": true, 00:09:40.520 "write": true, 00:09:40.520 "unmap": true, 00:09:40.520 "flush": true, 00:09:40.520 "reset": true, 00:09:40.520 "nvme_admin": false, 00:09:40.520 "nvme_io": false, 00:09:40.520 "nvme_io_md": false, 00:09:40.520 "write_zeroes": true, 00:09:40.520 "zcopy": true, 00:09:40.520 "get_zone_info": false, 00:09:40.520 "zone_management": false, 00:09:40.520 "zone_append": false, 00:09:40.520 "compare": false, 00:09:40.520 "compare_and_write": false, 00:09:40.520 "abort": true, 00:09:40.520 "seek_hole": false, 00:09:40.520 "seek_data": false, 00:09:40.520 "copy": true, 00:09:40.520 "nvme_iov_md": false 00:09:40.520 }, 00:09:40.520 "memory_domains": [ 00:09:40.520 { 00:09:40.520 "dma_device_id": "system", 00:09:40.520 "dma_device_type": 1 00:09:40.520 }, 00:09:40.520 { 00:09:40.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.520 "dma_device_type": 2 00:09:40.520 } 00:09:40.520 ], 00:09:40.520 "driver_specific": {} 00:09:40.520 } 00:09:40.520 ] 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.520 
03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 03:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.520 03:18:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.520 "name": "Existed_Raid", 00:09:40.520 "uuid": "6c4785de-fe6c-4c2b-95e4-63bb79185495", 00:09:40.520 "strip_size_kb": 64, 00:09:40.520 "state": "online", 00:09:40.520 "raid_level": "concat", 00:09:40.520 "superblock": true, 00:09:40.520 "num_base_bdevs": 3, 00:09:40.520 "num_base_bdevs_discovered": 3, 00:09:40.520 "num_base_bdevs_operational": 3, 00:09:40.520 "base_bdevs_list": [ 00:09:40.520 { 00:09:40.520 "name": "BaseBdev1", 00:09:40.520 "uuid": "ef6e07f3-31d9-463d-98b1-528429d63c66", 00:09:40.520 "is_configured": true, 00:09:40.520 "data_offset": 2048, 00:09:40.520 "data_size": 63488 00:09:40.520 }, 00:09:40.520 { 00:09:40.520 "name": "BaseBdev2", 00:09:40.520 "uuid": "887fa363-71ed-43af-be9a-2d112bde5bab", 00:09:40.520 "is_configured": true, 00:09:40.520 "data_offset": 2048, 00:09:40.520 "data_size": 63488 00:09:40.520 }, 00:09:40.520 { 00:09:40.520 "name": "BaseBdev3", 00:09:40.520 "uuid": "8d0ef320-acfe-4c89-806a-5b8c1615c7c2", 00:09:40.520 "is_configured": true, 00:09:40.520 "data_offset": 2048, 00:09:40.520 "data_size": 63488 00:09:40.520 } 00:09:40.520 ] 00:09:40.520 }' 00:09:40.520 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.520 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.089 
03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.089 [2024-11-21 03:18:28.439422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.089 "name": "Existed_Raid", 00:09:41.089 "aliases": [ 00:09:41.089 "6c4785de-fe6c-4c2b-95e4-63bb79185495" 00:09:41.089 ], 00:09:41.089 "product_name": "Raid Volume", 00:09:41.089 "block_size": 512, 00:09:41.089 "num_blocks": 190464, 00:09:41.089 "uuid": "6c4785de-fe6c-4c2b-95e4-63bb79185495", 00:09:41.089 "assigned_rate_limits": { 00:09:41.089 "rw_ios_per_sec": 0, 00:09:41.089 "rw_mbytes_per_sec": 0, 00:09:41.089 "r_mbytes_per_sec": 0, 00:09:41.089 "w_mbytes_per_sec": 0 00:09:41.089 }, 00:09:41.089 "claimed": false, 00:09:41.089 "zoned": false, 00:09:41.089 "supported_io_types": { 00:09:41.089 "read": true, 00:09:41.089 "write": true, 00:09:41.089 "unmap": true, 00:09:41.089 "flush": true, 00:09:41.089 "reset": true, 00:09:41.089 "nvme_admin": false, 00:09:41.089 "nvme_io": false, 00:09:41.089 "nvme_io_md": false, 00:09:41.089 "write_zeroes": true, 00:09:41.089 "zcopy": false, 00:09:41.089 "get_zone_info": false, 00:09:41.089 "zone_management": false, 00:09:41.089 "zone_append": false, 00:09:41.089 "compare": false, 00:09:41.089 "compare_and_write": false, 00:09:41.089 "abort": 
false, 00:09:41.089 "seek_hole": false, 00:09:41.089 "seek_data": false, 00:09:41.089 "copy": false, 00:09:41.089 "nvme_iov_md": false 00:09:41.089 }, 00:09:41.089 "memory_domains": [ 00:09:41.089 { 00:09:41.089 "dma_device_id": "system", 00:09:41.089 "dma_device_type": 1 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.089 "dma_device_type": 2 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "dma_device_id": "system", 00:09:41.089 "dma_device_type": 1 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.089 "dma_device_type": 2 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "dma_device_id": "system", 00:09:41.089 "dma_device_type": 1 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.089 "dma_device_type": 2 00:09:41.089 } 00:09:41.089 ], 00:09:41.089 "driver_specific": { 00:09:41.089 "raid": { 00:09:41.089 "uuid": "6c4785de-fe6c-4c2b-95e4-63bb79185495", 00:09:41.089 "strip_size_kb": 64, 00:09:41.089 "state": "online", 00:09:41.089 "raid_level": "concat", 00:09:41.089 "superblock": true, 00:09:41.089 "num_base_bdevs": 3, 00:09:41.089 "num_base_bdevs_discovered": 3, 00:09:41.089 "num_base_bdevs_operational": 3, 00:09:41.089 "base_bdevs_list": [ 00:09:41.089 { 00:09:41.089 "name": "BaseBdev1", 00:09:41.089 "uuid": "ef6e07f3-31d9-463d-98b1-528429d63c66", 00:09:41.089 "is_configured": true, 00:09:41.089 "data_offset": 2048, 00:09:41.089 "data_size": 63488 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "name": "BaseBdev2", 00:09:41.089 "uuid": "887fa363-71ed-43af-be9a-2d112bde5bab", 00:09:41.089 "is_configured": true, 00:09:41.089 "data_offset": 2048, 00:09:41.089 "data_size": 63488 00:09:41.089 }, 00:09:41.089 { 00:09:41.089 "name": "BaseBdev3", 00:09:41.089 "uuid": "8d0ef320-acfe-4c89-806a-5b8c1615c7c2", 00:09:41.089 "is_configured": true, 00:09:41.089 "data_offset": 2048, 00:09:41.089 "data_size": 63488 00:09:41.089 } 00:09:41.089 ] 00:09:41.089 } 
00:09:41.089 } 00:09:41.089 }' 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.089 BaseBdev2 00:09:41.089 BaseBdev3' 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.089 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.090 03:18:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.090 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.349 [2024-11-21 03:18:28.711204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:09:41.349 [2024-11-21 03:18:28.711243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.349 [2024-11-21 03:18:28.711314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.349 
03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.349 "name": "Existed_Raid", 00:09:41.349 "uuid": "6c4785de-fe6c-4c2b-95e4-63bb79185495", 00:09:41.349 "strip_size_kb": 64, 00:09:41.349 "state": "offline", 00:09:41.349 "raid_level": "concat", 00:09:41.349 "superblock": true, 00:09:41.349 "num_base_bdevs": 3, 00:09:41.349 "num_base_bdevs_discovered": 2, 00:09:41.349 "num_base_bdevs_operational": 2, 00:09:41.349 "base_bdevs_list": [ 00:09:41.349 { 00:09:41.349 "name": null, 00:09:41.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.349 "is_configured": false, 00:09:41.349 "data_offset": 0, 00:09:41.349 "data_size": 63488 00:09:41.349 }, 00:09:41.349 { 00:09:41.349 "name": "BaseBdev2", 00:09:41.349 "uuid": "887fa363-71ed-43af-be9a-2d112bde5bab", 00:09:41.349 "is_configured": true, 00:09:41.349 "data_offset": 2048, 00:09:41.349 "data_size": 63488 00:09:41.349 }, 00:09:41.349 { 00:09:41.349 "name": "BaseBdev3", 00:09:41.349 "uuid": "8d0ef320-acfe-4c89-806a-5b8c1615c7c2", 00:09:41.349 "is_configured": true, 00:09:41.349 "data_offset": 2048, 00:09:41.349 "data_size": 63488 00:09:41.349 } 00:09:41.349 ] 00:09:41.349 }' 00:09:41.349 03:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.349 
03:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.609 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:41.609 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.609 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.609 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.609 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.609 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.867 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 [2024-11-21 03:18:29.220434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 [2024-11-21 03:18:29.301187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.868 [2024-11-21 03:18:29.301259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 BaseBdev2 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.868 [ 00:09:41.868 { 00:09:41.868 "name": "BaseBdev2", 00:09:41.868 "aliases": [ 00:09:41.868 "66fe5ff1-b15f-448e-8995-beddf44403b5" 00:09:41.868 ], 00:09:41.868 "product_name": "Malloc disk", 00:09:41.868 "block_size": 512, 00:09:41.868 "num_blocks": 65536, 00:09:41.868 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:41.868 "assigned_rate_limits": { 00:09:41.868 "rw_ios_per_sec": 0, 00:09:41.868 "rw_mbytes_per_sec": 0, 00:09:41.868 "r_mbytes_per_sec": 0, 00:09:41.868 "w_mbytes_per_sec": 0 00:09:41.868 }, 00:09:41.868 "claimed": false, 00:09:41.868 "zoned": false, 00:09:41.868 "supported_io_types": { 00:09:41.868 "read": true, 00:09:41.868 "write": true, 00:09:41.868 "unmap": true, 00:09:41.868 "flush": true, 00:09:41.868 "reset": true, 00:09:41.868 "nvme_admin": false, 00:09:41.868 "nvme_io": false, 00:09:41.868 "nvme_io_md": false, 00:09:41.868 "write_zeroes": true, 00:09:41.868 "zcopy": true, 00:09:41.868 "get_zone_info": false, 00:09:41.868 "zone_management": false, 00:09:41.868 "zone_append": false, 00:09:41.868 "compare": false, 00:09:41.868 "compare_and_write": false, 00:09:41.868 "abort": true, 00:09:41.868 "seek_hole": false, 00:09:41.868 "seek_data": false, 00:09:41.868 "copy": true, 00:09:41.868 
"nvme_iov_md": false 00:09:41.868 }, 00:09:41.868 "memory_domains": [ 00:09:41.868 { 00:09:41.868 "dma_device_id": "system", 00:09:41.868 "dma_device_type": 1 00:09:41.868 }, 00:09:41.868 { 00:09:41.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.868 "dma_device_type": 2 00:09:41.868 } 00:09:41.868 ], 00:09:41.868 "driver_specific": {} 00:09:41.868 } 00:09:41.868 ] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.868 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.127 BaseBdev3 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.127 
03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.127 [ 00:09:42.127 { 00:09:42.127 "name": "BaseBdev3", 00:09:42.127 "aliases": [ 00:09:42.127 "2517698f-1381-4c11-9c1b-f30246692d85" 00:09:42.127 ], 00:09:42.127 "product_name": "Malloc disk", 00:09:42.127 "block_size": 512, 00:09:42.127 "num_blocks": 65536, 00:09:42.127 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:42.127 "assigned_rate_limits": { 00:09:42.127 "rw_ios_per_sec": 0, 00:09:42.127 "rw_mbytes_per_sec": 0, 00:09:42.127 "r_mbytes_per_sec": 0, 00:09:42.127 "w_mbytes_per_sec": 0 00:09:42.127 }, 00:09:42.127 "claimed": false, 00:09:42.127 "zoned": false, 00:09:42.127 "supported_io_types": { 00:09:42.127 "read": true, 00:09:42.127 "write": true, 00:09:42.127 "unmap": true, 00:09:42.127 "flush": true, 00:09:42.127 "reset": true, 00:09:42.127 "nvme_admin": false, 00:09:42.127 "nvme_io": false, 00:09:42.127 "nvme_io_md": false, 00:09:42.127 "write_zeroes": true, 00:09:42.127 "zcopy": true, 00:09:42.127 "get_zone_info": false, 00:09:42.127 "zone_management": false, 00:09:42.127 "zone_append": false, 00:09:42.127 "compare": false, 00:09:42.127 "compare_and_write": false, 00:09:42.127 "abort": true, 00:09:42.127 "seek_hole": false, 00:09:42.127 "seek_data": 
false, 00:09:42.127 "copy": true, 00:09:42.127 "nvme_iov_md": false 00:09:42.127 }, 00:09:42.127 "memory_domains": [ 00:09:42.127 { 00:09:42.127 "dma_device_id": "system", 00:09:42.127 "dma_device_type": 1 00:09:42.127 }, 00:09:42.127 { 00:09:42.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.127 "dma_device_type": 2 00:09:42.127 } 00:09:42.127 ], 00:09:42.127 "driver_specific": {} 00:09:42.127 } 00:09:42.127 ] 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.127 [2024-11-21 03:18:29.478858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.127 [2024-11-21 03:18:29.478929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.127 [2024-11-21 03:18:29.478949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.127 [2024-11-21 03:18:29.481300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.127 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.128 "name": "Existed_Raid", 00:09:42.128 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:42.128 "strip_size_kb": 64, 00:09:42.128 "state": "configuring", 00:09:42.128 "raid_level": "concat", 
00:09:42.128 "superblock": true, 00:09:42.128 "num_base_bdevs": 3, 00:09:42.128 "num_base_bdevs_discovered": 2, 00:09:42.128 "num_base_bdevs_operational": 3, 00:09:42.128 "base_bdevs_list": [ 00:09:42.128 { 00:09:42.128 "name": "BaseBdev1", 00:09:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.128 "is_configured": false, 00:09:42.128 "data_offset": 0, 00:09:42.128 "data_size": 0 00:09:42.128 }, 00:09:42.128 { 00:09:42.128 "name": "BaseBdev2", 00:09:42.128 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:42.128 "is_configured": true, 00:09:42.128 "data_offset": 2048, 00:09:42.128 "data_size": 63488 00:09:42.128 }, 00:09:42.128 { 00:09:42.128 "name": "BaseBdev3", 00:09:42.128 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:42.128 "is_configured": true, 00:09:42.128 "data_offset": 2048, 00:09:42.128 "data_size": 63488 00:09:42.128 } 00:09:42.128 ] 00:09:42.128 }' 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.128 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 [2024-11-21 03:18:29.915005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.387 03:18:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.387 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.388 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.646 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.646 "name": "Existed_Raid", 00:09:42.646 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:42.646 "strip_size_kb": 64, 00:09:42.646 "state": "configuring", 00:09:42.646 "raid_level": "concat", 00:09:42.646 "superblock": true, 00:09:42.646 "num_base_bdevs": 3, 00:09:42.646 "num_base_bdevs_discovered": 1, 00:09:42.646 "num_base_bdevs_operational": 3, 00:09:42.646 "base_bdevs_list": [ 00:09:42.646 
{ 00:09:42.646 "name": "BaseBdev1", 00:09:42.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.646 "is_configured": false, 00:09:42.646 "data_offset": 0, 00:09:42.646 "data_size": 0 00:09:42.646 }, 00:09:42.646 { 00:09:42.646 "name": null, 00:09:42.646 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:42.646 "is_configured": false, 00:09:42.646 "data_offset": 0, 00:09:42.646 "data_size": 63488 00:09:42.646 }, 00:09:42.646 { 00:09:42.646 "name": "BaseBdev3", 00:09:42.646 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:42.646 "is_configured": true, 00:09:42.646 "data_offset": 2048, 00:09:42.646 "data_size": 63488 00:09:42.646 } 00:09:42.646 ] 00:09:42.646 }' 00:09:42.646 03:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.646 03:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.906 [2024-11-21 03:18:30.455860] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.906 BaseBdev1 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.906 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.165 [ 00:09:43.165 { 00:09:43.165 "name": "BaseBdev1", 00:09:43.165 "aliases": [ 00:09:43.165 "07657465-5181-4ac9-ad63-c40826bad8b2" 00:09:43.165 ], 00:09:43.165 "product_name": "Malloc disk", 00:09:43.165 "block_size": 512, 00:09:43.165 "num_blocks": 65536, 00:09:43.165 
"uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:43.165 "assigned_rate_limits": { 00:09:43.165 "rw_ios_per_sec": 0, 00:09:43.165 "rw_mbytes_per_sec": 0, 00:09:43.165 "r_mbytes_per_sec": 0, 00:09:43.165 "w_mbytes_per_sec": 0 00:09:43.165 }, 00:09:43.165 "claimed": true, 00:09:43.165 "claim_type": "exclusive_write", 00:09:43.165 "zoned": false, 00:09:43.165 "supported_io_types": { 00:09:43.165 "read": true, 00:09:43.165 "write": true, 00:09:43.165 "unmap": true, 00:09:43.165 "flush": true, 00:09:43.165 "reset": true, 00:09:43.165 "nvme_admin": false, 00:09:43.165 "nvme_io": false, 00:09:43.165 "nvme_io_md": false, 00:09:43.165 "write_zeroes": true, 00:09:43.165 "zcopy": true, 00:09:43.165 "get_zone_info": false, 00:09:43.165 "zone_management": false, 00:09:43.165 "zone_append": false, 00:09:43.165 "compare": false, 00:09:43.165 "compare_and_write": false, 00:09:43.165 "abort": true, 00:09:43.165 "seek_hole": false, 00:09:43.165 "seek_data": false, 00:09:43.165 "copy": true, 00:09:43.165 "nvme_iov_md": false 00:09:43.165 }, 00:09:43.165 "memory_domains": [ 00:09:43.165 { 00:09:43.165 "dma_device_id": "system", 00:09:43.165 "dma_device_type": 1 00:09:43.165 }, 00:09:43.165 { 00:09:43.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.165 "dma_device_type": 2 00:09:43.165 } 00:09:43.165 ], 00:09:43.165 "driver_specific": {} 00:09:43.165 } 00:09:43.165 ] 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.165 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.166 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.166 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.166 "name": "Existed_Raid", 00:09:43.166 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:43.166 "strip_size_kb": 64, 00:09:43.166 "state": "configuring", 00:09:43.166 "raid_level": "concat", 00:09:43.166 "superblock": true, 00:09:43.166 "num_base_bdevs": 3, 00:09:43.166 "num_base_bdevs_discovered": 2, 00:09:43.166 "num_base_bdevs_operational": 3, 00:09:43.166 "base_bdevs_list": [ 00:09:43.166 { 00:09:43.166 "name": "BaseBdev1", 00:09:43.166 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 
00:09:43.166 "is_configured": true, 00:09:43.166 "data_offset": 2048, 00:09:43.166 "data_size": 63488 00:09:43.166 }, 00:09:43.166 { 00:09:43.166 "name": null, 00:09:43.166 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:43.166 "is_configured": false, 00:09:43.166 "data_offset": 0, 00:09:43.166 "data_size": 63488 00:09:43.166 }, 00:09:43.166 { 00:09:43.166 "name": "BaseBdev3", 00:09:43.166 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:43.166 "is_configured": true, 00:09:43.166 "data_offset": 2048, 00:09:43.166 "data_size": 63488 00:09:43.166 } 00:09:43.166 ] 00:09:43.166 }' 00:09:43.166 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.166 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.425 03:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.684 [2024-11-21 03:18:30.992134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.684 03:18:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.684 03:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.684 "name": 
"Existed_Raid", 00:09:43.684 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:43.684 "strip_size_kb": 64, 00:09:43.684 "state": "configuring", 00:09:43.684 "raid_level": "concat", 00:09:43.684 "superblock": true, 00:09:43.684 "num_base_bdevs": 3, 00:09:43.684 "num_base_bdevs_discovered": 1, 00:09:43.684 "num_base_bdevs_operational": 3, 00:09:43.684 "base_bdevs_list": [ 00:09:43.684 { 00:09:43.684 "name": "BaseBdev1", 00:09:43.684 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:43.684 "is_configured": true, 00:09:43.684 "data_offset": 2048, 00:09:43.684 "data_size": 63488 00:09:43.684 }, 00:09:43.684 { 00:09:43.684 "name": null, 00:09:43.684 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:43.684 "is_configured": false, 00:09:43.684 "data_offset": 0, 00:09:43.684 "data_size": 63488 00:09:43.684 }, 00:09:43.684 { 00:09:43.684 "name": null, 00:09:43.684 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:43.684 "is_configured": false, 00:09:43.684 "data_offset": 0, 00:09:43.684 "data_size": 63488 00:09:43.684 } 00:09:43.684 ] 00:09:43.684 }' 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.684 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:43.943 
03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.943 [2024-11-21 03:18:31.492258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.943 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.202 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.202 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.202 "name": "Existed_Raid", 00:09:44.202 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:44.202 "strip_size_kb": 64, 00:09:44.202 "state": "configuring", 00:09:44.202 "raid_level": "concat", 00:09:44.202 "superblock": true, 00:09:44.202 "num_base_bdevs": 3, 00:09:44.202 "num_base_bdevs_discovered": 2, 00:09:44.202 "num_base_bdevs_operational": 3, 00:09:44.202 "base_bdevs_list": [ 00:09:44.202 { 00:09:44.202 "name": "BaseBdev1", 00:09:44.202 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:44.202 "is_configured": true, 00:09:44.202 "data_offset": 2048, 00:09:44.202 "data_size": 63488 00:09:44.202 }, 00:09:44.202 { 00:09:44.202 "name": null, 00:09:44.202 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:44.202 "is_configured": false, 00:09:44.202 "data_offset": 0, 00:09:44.202 "data_size": 63488 00:09:44.202 }, 00:09:44.202 { 00:09:44.202 "name": "BaseBdev3", 00:09:44.202 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:44.202 "is_configured": true, 00:09:44.202 "data_offset": 2048, 00:09:44.202 "data_size": 63488 00:09:44.202 } 00:09:44.202 ] 00:09:44.202 }' 00:09:44.202 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.202 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.461 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.461 03:18:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.461 03:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.461 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.461 03:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.461 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:44.461 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:44.461 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.461 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.461 [2024-11-21 03:18:32.020436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.721 "name": "Existed_Raid", 00:09:44.721 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:44.721 "strip_size_kb": 64, 00:09:44.721 "state": "configuring", 00:09:44.721 "raid_level": "concat", 00:09:44.721 "superblock": true, 00:09:44.721 "num_base_bdevs": 3, 00:09:44.721 "num_base_bdevs_discovered": 1, 00:09:44.721 "num_base_bdevs_operational": 3, 00:09:44.721 "base_bdevs_list": [ 00:09:44.721 { 00:09:44.721 "name": null, 00:09:44.721 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:44.721 "is_configured": false, 00:09:44.721 "data_offset": 0, 00:09:44.721 "data_size": 63488 00:09:44.721 }, 00:09:44.721 { 00:09:44.721 "name": null, 00:09:44.721 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:44.721 "is_configured": false, 00:09:44.721 "data_offset": 0, 00:09:44.721 "data_size": 63488 00:09:44.721 }, 00:09:44.721 { 00:09:44.721 "name": "BaseBdev3", 00:09:44.721 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:44.721 "is_configured": true, 00:09:44.721 "data_offset": 2048, 00:09:44.721 
"data_size": 63488 00:09:44.721 } 00:09:44.721 ] 00:09:44.721 }' 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.721 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.980 [2024-11-21 03:18:32.536457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.980 03:18:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.980 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.240 "name": "Existed_Raid", 00:09:45.240 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:45.240 "strip_size_kb": 64, 00:09:45.240 "state": "configuring", 00:09:45.240 "raid_level": "concat", 00:09:45.240 "superblock": true, 00:09:45.240 "num_base_bdevs": 3, 00:09:45.240 "num_base_bdevs_discovered": 2, 00:09:45.240 "num_base_bdevs_operational": 3, 00:09:45.240 "base_bdevs_list": [ 00:09:45.240 { 00:09:45.240 "name": null, 00:09:45.240 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:45.240 "is_configured": 
false, 00:09:45.240 "data_offset": 0, 00:09:45.240 "data_size": 63488 00:09:45.240 }, 00:09:45.240 { 00:09:45.240 "name": "BaseBdev2", 00:09:45.240 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:45.240 "is_configured": true, 00:09:45.240 "data_offset": 2048, 00:09:45.240 "data_size": 63488 00:09:45.240 }, 00:09:45.240 { 00:09:45.240 "name": "BaseBdev3", 00:09:45.240 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:45.240 "is_configured": true, 00:09:45.240 "data_offset": 2048, 00:09:45.240 "data_size": 63488 00:09:45.240 } 00:09:45.240 ] 00:09:45.240 }' 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.240 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.500 03:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.500 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.500 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 03:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:45.500 03:18:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07657465-5181-4ac9-ad63-c40826bad8b2 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.500 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.760 [2024-11-21 03:18:33.077555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:45.760 [2024-11-21 03:18:33.077746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.760 [2024-11-21 03:18:33.077759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.760 [2024-11-21 03:18:33.078077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:45.760 [2024-11-21 03:18:33.078211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.760 [2024-11-21 03:18:33.078228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:45.760 [2024-11-21 03:18:33.078350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.760 NewBaseBdev 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 
00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.760 [ 00:09:45.760 { 00:09:45.760 "name": "NewBaseBdev", 00:09:45.760 "aliases": [ 00:09:45.760 "07657465-5181-4ac9-ad63-c40826bad8b2" 00:09:45.760 ], 00:09:45.760 "product_name": "Malloc disk", 00:09:45.760 "block_size": 512, 00:09:45.760 "num_blocks": 65536, 00:09:45.760 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:45.760 "assigned_rate_limits": { 00:09:45.760 "rw_ios_per_sec": 0, 00:09:45.760 "rw_mbytes_per_sec": 0, 00:09:45.760 "r_mbytes_per_sec": 0, 00:09:45.760 "w_mbytes_per_sec": 0 00:09:45.760 }, 00:09:45.760 "claimed": true, 00:09:45.760 "claim_type": "exclusive_write", 00:09:45.760 "zoned": false, 00:09:45.760 "supported_io_types": { 00:09:45.760 "read": true, 00:09:45.760 "write": true, 00:09:45.760 "unmap": true, 00:09:45.760 "flush": true, 00:09:45.760 "reset": true, 00:09:45.760 "nvme_admin": false, 00:09:45.760 "nvme_io": false, 00:09:45.760 "nvme_io_md": false, 00:09:45.760 "write_zeroes": true, 00:09:45.760 
"zcopy": true, 00:09:45.760 "get_zone_info": false, 00:09:45.760 "zone_management": false, 00:09:45.760 "zone_append": false, 00:09:45.760 "compare": false, 00:09:45.760 "compare_and_write": false, 00:09:45.760 "abort": true, 00:09:45.760 "seek_hole": false, 00:09:45.760 "seek_data": false, 00:09:45.760 "copy": true, 00:09:45.760 "nvme_iov_md": false 00:09:45.760 }, 00:09:45.760 "memory_domains": [ 00:09:45.760 { 00:09:45.760 "dma_device_id": "system", 00:09:45.760 "dma_device_type": 1 00:09:45.760 }, 00:09:45.760 { 00:09:45.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.760 "dma_device_type": 2 00:09:45.760 } 00:09:45.760 ], 00:09:45.760 "driver_specific": {} 00:09:45.760 } 00:09:45.760 ] 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.760 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.761 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.761 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.761 "name": "Existed_Raid", 00:09:45.761 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:45.761 "strip_size_kb": 64, 00:09:45.761 "state": "online", 00:09:45.761 "raid_level": "concat", 00:09:45.761 "superblock": true, 00:09:45.761 "num_base_bdevs": 3, 00:09:45.761 "num_base_bdevs_discovered": 3, 00:09:45.761 "num_base_bdevs_operational": 3, 00:09:45.761 "base_bdevs_list": [ 00:09:45.761 { 00:09:45.761 "name": "NewBaseBdev", 00:09:45.761 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:45.761 "is_configured": true, 00:09:45.761 "data_offset": 2048, 00:09:45.761 "data_size": 63488 00:09:45.761 }, 00:09:45.761 { 00:09:45.761 "name": "BaseBdev2", 00:09:45.761 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:45.761 "is_configured": true, 00:09:45.761 "data_offset": 2048, 00:09:45.761 "data_size": 63488 00:09:45.761 }, 00:09:45.761 { 00:09:45.761 "name": "BaseBdev3", 00:09:45.761 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:45.761 "is_configured": true, 00:09:45.761 "data_offset": 2048, 00:09:45.761 "data_size": 63488 00:09:45.761 } 00:09:45.761 ] 00:09:45.761 }' 00:09:45.761 03:18:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.761 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.020 [2024-11-21 03:18:33.558097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.020 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.281 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.281 "name": "Existed_Raid", 00:09:46.281 "aliases": [ 00:09:46.281 "dc20f4f1-483d-4035-afae-6ca573a88908" 00:09:46.281 ], 00:09:46.281 "product_name": "Raid Volume", 00:09:46.281 "block_size": 512, 00:09:46.281 "num_blocks": 190464, 00:09:46.281 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:46.281 "assigned_rate_limits": { 00:09:46.281 
"rw_ios_per_sec": 0, 00:09:46.281 "rw_mbytes_per_sec": 0, 00:09:46.281 "r_mbytes_per_sec": 0, 00:09:46.281 "w_mbytes_per_sec": 0 00:09:46.281 }, 00:09:46.281 "claimed": false, 00:09:46.281 "zoned": false, 00:09:46.281 "supported_io_types": { 00:09:46.281 "read": true, 00:09:46.281 "write": true, 00:09:46.281 "unmap": true, 00:09:46.281 "flush": true, 00:09:46.281 "reset": true, 00:09:46.281 "nvme_admin": false, 00:09:46.281 "nvme_io": false, 00:09:46.281 "nvme_io_md": false, 00:09:46.281 "write_zeroes": true, 00:09:46.281 "zcopy": false, 00:09:46.281 "get_zone_info": false, 00:09:46.281 "zone_management": false, 00:09:46.281 "zone_append": false, 00:09:46.281 "compare": false, 00:09:46.281 "compare_and_write": false, 00:09:46.281 "abort": false, 00:09:46.282 "seek_hole": false, 00:09:46.282 "seek_data": false, 00:09:46.282 "copy": false, 00:09:46.282 "nvme_iov_md": false 00:09:46.282 }, 00:09:46.282 "memory_domains": [ 00:09:46.282 { 00:09:46.282 "dma_device_id": "system", 00:09:46.282 "dma_device_type": 1 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.282 "dma_device_type": 2 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "dma_device_id": "system", 00:09:46.282 "dma_device_type": 1 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.282 "dma_device_type": 2 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "dma_device_id": "system", 00:09:46.282 "dma_device_type": 1 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.282 "dma_device_type": 2 00:09:46.282 } 00:09:46.282 ], 00:09:46.282 "driver_specific": { 00:09:46.282 "raid": { 00:09:46.282 "uuid": "dc20f4f1-483d-4035-afae-6ca573a88908", 00:09:46.282 "strip_size_kb": 64, 00:09:46.282 "state": "online", 00:09:46.282 "raid_level": "concat", 00:09:46.282 "superblock": true, 00:09:46.282 "num_base_bdevs": 3, 00:09:46.282 "num_base_bdevs_discovered": 3, 00:09:46.282 "num_base_bdevs_operational": 
3, 00:09:46.282 "base_bdevs_list": [ 00:09:46.282 { 00:09:46.282 "name": "NewBaseBdev", 00:09:46.282 "uuid": "07657465-5181-4ac9-ad63-c40826bad8b2", 00:09:46.282 "is_configured": true, 00:09:46.282 "data_offset": 2048, 00:09:46.282 "data_size": 63488 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "name": "BaseBdev2", 00:09:46.282 "uuid": "66fe5ff1-b15f-448e-8995-beddf44403b5", 00:09:46.282 "is_configured": true, 00:09:46.282 "data_offset": 2048, 00:09:46.282 "data_size": 63488 00:09:46.282 }, 00:09:46.282 { 00:09:46.282 "name": "BaseBdev3", 00:09:46.282 "uuid": "2517698f-1381-4c11-9c1b-f30246692d85", 00:09:46.282 "is_configured": true, 00:09:46.282 "data_offset": 2048, 00:09:46.282 "data_size": 63488 00:09:46.282 } 00:09:46.282 ] 00:09:46.282 } 00:09:46.282 } 00:09:46.282 }' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:46.282 BaseBdev2 00:09:46.282 BaseBdev3' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.282 03:18:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 [2024-11-21 03:18:33.833840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.282 [2024-11-21 03:18:33.833880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.282 [2024-11-21 03:18:33.833966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.282 [2024-11-21 03:18:33.834047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.282 [2024-11-21 03:18:33.834059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79371 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79371 ']' 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 79371 00:09:46.282 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79371 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.542 killing process with pid 79371 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79371' 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 79371 00:09:46.542 [2024-11-21 03:18:33.882637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.542 03:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 79371 00:09:46.542 [2024-11-21 03:18:33.943154] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.801 03:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.801 00:09:46.801 real 0m9.174s 00:09:46.801 user 0m15.389s 00:09:46.801 sys 0m1.999s 00:09:46.801 03:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.801 03:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.801 ************************************ 00:09:46.801 END TEST raid_state_function_test_sb 00:09:46.801 ************************************ 00:09:46.801 03:18:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:46.801 03:18:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.801 03:18:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.801 03:18:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.801 ************************************ 00:09:46.801 START TEST 
raid_superblock_test 00:09:46.801 ************************************ 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79980 00:09:46.801 
03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79980 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79980 ']' 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.801 03:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.060 [2024-11-21 03:18:34.438586] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:47.060 [2024-11-21 03:18:34.438833] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79980 ] 00:09:47.060 [2024-11-21 03:18:34.580229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
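The `verify_raid_bdev_properties` loop in the preceding raid_state_function_test_sb run builds its comparison strings with a jq `join(" ")` over the block-size and metadata fields. A standalone sketch of why `cmp_base_bdev` comes out as `'512   '` in that trace — assuming the md fields are null for these malloc-backed bdevs, which the trailing spaces in the log suggest:

```shell
# Minimal reproduction of the bdev_raid.sh@189/@192 jq filter. For a plain
# 512-byte-block bdev with no metadata, the md fields are null, and jq's
# join() renders null elements as empty strings -- hence the trailing spaces
# that the later [[ 512 == \5\1\2\ \ \  ]] pattern match accounts for.
bdev_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'
cmp_base_bdev=$(echo "$bdev_json" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
printf "cmp_base_bdev='%s'\n" "$cmp_base_bdev"
```

The `bdev_json` value here is a hand-trimmed stand-in for the full `bdev_get_bdevs` output, not the script's actual RPC response.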
00:09:47.060 [2024-11-21 03:18:34.619317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.319 [2024-11-21 03:18:34.661474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.319 [2024-11-21 03:18:34.737876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.319 [2024-11-21 03:18:34.738047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 malloc1 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 [2024-11-21 03:18:35.334752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.887 [2024-11-21 03:18:35.334869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.887 [2024-11-21 03:18:35.334929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:47.887 [2024-11-21 03:18:35.334969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.887 [2024-11-21 03:18:35.337577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.887 [2024-11-21 03:18:35.337647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.887 pt1 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 malloc2 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 [2024-11-21 03:18:35.374129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.887 [2024-11-21 03:18:35.374186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.887 [2024-11-21 03:18:35.374207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:47.887 [2024-11-21 03:18:35.374216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.887 [2024-11-21 03:18:35.376722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.887 [2024-11-21 03:18:35.376808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.887 pt2 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 malloc3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 [2024-11-21 03:18:35.409355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.887 [2024-11-21 03:18:35.409447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.887 [2024-11-21 03:18:35.409505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:47.887 [2024-11-21 03:18:35.409539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:47.887 [2024-11-21 03:18:35.411967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.887 [2024-11-21 03:18:35.412071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.887 pt3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.887 [2024-11-21 03:18:35.421408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.887 [2024-11-21 03:18:35.423604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.887 [2024-11-21 03:18:35.423720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.887 [2024-11-21 03:18:35.423904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:47.887 [2024-11-21 03:18:35.423957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:47.887 [2024-11-21 03:18:35.424270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:47.887 [2024-11-21 03:18:35.424490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:47.887 [2024-11-21 03:18:35.424532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:47.887 [2024-11-21 
03:18:35.424702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.887 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.888 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.146 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.146 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.146 "name": "raid_bdev1", 00:09:48.146 
"uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:48.146 "strip_size_kb": 64, 00:09:48.146 "state": "online", 00:09:48.146 "raid_level": "concat", 00:09:48.146 "superblock": true, 00:09:48.146 "num_base_bdevs": 3, 00:09:48.146 "num_base_bdevs_discovered": 3, 00:09:48.146 "num_base_bdevs_operational": 3, 00:09:48.146 "base_bdevs_list": [ 00:09:48.146 { 00:09:48.146 "name": "pt1", 00:09:48.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.146 "is_configured": true, 00:09:48.146 "data_offset": 2048, 00:09:48.146 "data_size": 63488 00:09:48.146 }, 00:09:48.146 { 00:09:48.146 "name": "pt2", 00:09:48.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.146 "is_configured": true, 00:09:48.146 "data_offset": 2048, 00:09:48.146 "data_size": 63488 00:09:48.146 }, 00:09:48.146 { 00:09:48.146 "name": "pt3", 00:09:48.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.146 "is_configured": true, 00:09:48.146 "data_offset": 2048, 00:09:48.146 "data_size": 63488 00:09:48.146 } 00:09:48.146 ] 00:09:48.146 }' 00:09:48.146 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.146 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.406 [2024-11-21 03:18:35.881817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.406 "name": "raid_bdev1", 00:09:48.406 "aliases": [ 00:09:48.406 "4429fbe6-7019-4ce3-a871-ab01951b510b" 00:09:48.406 ], 00:09:48.406 "product_name": "Raid Volume", 00:09:48.406 "block_size": 512, 00:09:48.406 "num_blocks": 190464, 00:09:48.406 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:48.406 "assigned_rate_limits": { 00:09:48.406 "rw_ios_per_sec": 0, 00:09:48.406 "rw_mbytes_per_sec": 0, 00:09:48.406 "r_mbytes_per_sec": 0, 00:09:48.406 "w_mbytes_per_sec": 0 00:09:48.406 }, 00:09:48.406 "claimed": false, 00:09:48.406 "zoned": false, 00:09:48.406 "supported_io_types": { 00:09:48.406 "read": true, 00:09:48.406 "write": true, 00:09:48.406 "unmap": true, 00:09:48.406 "flush": true, 00:09:48.406 "reset": true, 00:09:48.406 "nvme_admin": false, 00:09:48.406 "nvme_io": false, 00:09:48.406 "nvme_io_md": false, 00:09:48.406 "write_zeroes": true, 00:09:48.406 "zcopy": false, 00:09:48.406 "get_zone_info": false, 00:09:48.406 "zone_management": false, 00:09:48.406 "zone_append": false, 00:09:48.406 "compare": false, 00:09:48.406 "compare_and_write": false, 00:09:48.406 "abort": false, 00:09:48.406 "seek_hole": false, 00:09:48.406 "seek_data": false, 00:09:48.406 "copy": false, 00:09:48.406 "nvme_iov_md": false 00:09:48.406 }, 00:09:48.406 "memory_domains": [ 00:09:48.406 { 00:09:48.406 "dma_device_id": "system", 00:09:48.406 
"dma_device_type": 1 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.406 "dma_device_type": 2 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "dma_device_id": "system", 00:09:48.406 "dma_device_type": 1 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.406 "dma_device_type": 2 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "dma_device_id": "system", 00:09:48.406 "dma_device_type": 1 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.406 "dma_device_type": 2 00:09:48.406 } 00:09:48.406 ], 00:09:48.406 "driver_specific": { 00:09:48.406 "raid": { 00:09:48.406 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:48.406 "strip_size_kb": 64, 00:09:48.406 "state": "online", 00:09:48.406 "raid_level": "concat", 00:09:48.406 "superblock": true, 00:09:48.406 "num_base_bdevs": 3, 00:09:48.406 "num_base_bdevs_discovered": 3, 00:09:48.406 "num_base_bdevs_operational": 3, 00:09:48.406 "base_bdevs_list": [ 00:09:48.406 { 00:09:48.406 "name": "pt1", 00:09:48.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.406 "is_configured": true, 00:09:48.406 "data_offset": 2048, 00:09:48.406 "data_size": 63488 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "name": "pt2", 00:09:48.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.406 "is_configured": true, 00:09:48.406 "data_offset": 2048, 00:09:48.406 "data_size": 63488 00:09:48.406 }, 00:09:48.406 { 00:09:48.406 "name": "pt3", 00:09:48.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.406 "is_configured": true, 00:09:48.406 "data_offset": 2048, 00:09:48.406 "data_size": 63488 00:09:48.406 } 00:09:48.406 ] 00:09:48.406 } 00:09:48.406 } 00:09:48.406 }' 00:09:48.406 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.665 03:18:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.665 pt2 00:09:48.665 pt3' 00:09:48.665 03:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.665 [2024-11-21 03:18:36.181866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4429fbe6-7019-4ce3-a871-ab01951b510b 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4429fbe6-7019-4ce3-a871-ab01951b510b ']' 00:09:48.665 03:18:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.665 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.924 [2024-11-21 03:18:36.229577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.924 [2024-11-21 03:18:36.229656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.924 [2024-11-21 03:18:36.229759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.924 [2024-11-21 03:18:36.229877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.924 [2024-11-21 03:18:36.229931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.924 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.925 03:18:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 [2024-11-21 03:18:36.381702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.925 [2024-11-21 03:18:36.384145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:48.925 [2024-11-21 03:18:36.384199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:48.925 [2024-11-21 03:18:36.384252] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.925 [2024-11-21 03:18:36.384309] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.925 [2024-11-21 03:18:36.384330] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.925 [2024-11-21 03:18:36.384345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.925 [2024-11-21 03:18:36.384364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:48.925 request: 00:09:48.925 { 00:09:48.925 "name": "raid_bdev1", 00:09:48.925 "raid_level": "concat", 00:09:48.925 "base_bdevs": [ 00:09:48.925 "malloc1", 00:09:48.925 "malloc2", 00:09:48.925 "malloc3" 00:09:48.925 ], 00:09:48.925 "strip_size_kb": 64, 00:09:48.925 "superblock": false, 00:09:48.925 "method": "bdev_raid_create", 00:09:48.925 "req_id": 1 00:09:48.925 } 00:09:48.925 Got JSON-RPC error response 00:09:48.925 response: 00:09:48.925 { 00:09:48.925 "code": -17, 00:09:48.925 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.925 } 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.925 03:18:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 [2024-11-21 03:18:36.445634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.925 [2024-11-21 03:18:36.445689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.925 [2024-11-21 03:18:36.445709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:48.925 [2024-11-21 03:18:36.445718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.925 [2024-11-21 03:18:36.448356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.925 [2024-11-21 03:18:36.448392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.925 [2024-11-21 03:18:36.448474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.925 [2024-11-21 03:18:36.448522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.925 pt1 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:48.925 03:18:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.925 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.184 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.184 "name": "raid_bdev1", 00:09:49.184 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:49.184 "strip_size_kb": 64, 00:09:49.184 "state": "configuring", 00:09:49.184 "raid_level": "concat", 00:09:49.184 "superblock": true, 00:09:49.184 "num_base_bdevs": 3, 00:09:49.184 "num_base_bdevs_discovered": 1, 00:09:49.184 "num_base_bdevs_operational": 3, 00:09:49.184 "base_bdevs_list": [ 
00:09:49.184 { 00:09:49.184 "name": "pt1", 00:09:49.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.184 "is_configured": true, 00:09:49.184 "data_offset": 2048, 00:09:49.184 "data_size": 63488 00:09:49.184 }, 00:09:49.184 { 00:09:49.184 "name": null, 00:09:49.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.184 "is_configured": false, 00:09:49.184 "data_offset": 2048, 00:09:49.184 "data_size": 63488 00:09:49.184 }, 00:09:49.184 { 00:09:49.184 "name": null, 00:09:49.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.184 "is_configured": false, 00:09:49.184 "data_offset": 2048, 00:09:49.184 "data_size": 63488 00:09:49.184 } 00:09:49.184 ] 00:09:49.184 }' 00:09:49.184 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.184 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.443 [2024-11-21 03:18:36.905795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.443 [2024-11-21 03:18:36.905917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.443 [2024-11-21 03:18:36.905993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:49.443 [2024-11-21 03:18:36.906042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.443 [2024-11-21 03:18:36.906586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.443 [2024-11-21 
03:18:36.906646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.443 [2024-11-21 03:18:36.906772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.443 [2024-11-21 03:18:36.906827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.443 pt2 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.443 [2024-11-21 03:18:36.917831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.443 "name": "raid_bdev1", 00:09:49.443 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:49.443 "strip_size_kb": 64, 00:09:49.443 "state": "configuring", 00:09:49.443 "raid_level": "concat", 00:09:49.443 "superblock": true, 00:09:49.443 "num_base_bdevs": 3, 00:09:49.443 "num_base_bdevs_discovered": 1, 00:09:49.443 "num_base_bdevs_operational": 3, 00:09:49.443 "base_bdevs_list": [ 00:09:49.443 { 00:09:49.443 "name": "pt1", 00:09:49.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.443 "is_configured": true, 00:09:49.443 "data_offset": 2048, 00:09:49.443 "data_size": 63488 00:09:49.443 }, 00:09:49.443 { 00:09:49.443 "name": null, 00:09:49.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.443 "is_configured": false, 00:09:49.443 "data_offset": 0, 00:09:49.443 "data_size": 63488 00:09:49.443 }, 00:09:49.443 { 00:09:49.443 "name": null, 00:09:49.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.443 "is_configured": false, 00:09:49.443 "data_offset": 2048, 00:09:49.443 "data_size": 63488 00:09:49.443 } 00:09:49.443 ] 00:09:49.443 }' 00:09:49.443 03:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.443 03:18:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.009 [2024-11-21 03:18:37.353959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.009 [2024-11-21 03:18:37.354141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.009 [2024-11-21 03:18:37.354167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:50.009 [2024-11-21 03:18:37.354198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.009 [2024-11-21 03:18:37.354756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.009 [2024-11-21 03:18:37.354780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.009 [2024-11-21 03:18:37.354877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.009 [2024-11-21 03:18:37.354909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.009 pt2 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.009 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.009 [2024-11-21 03:18:37.365888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.009 [2024-11-21 03:18:37.365945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.009 [2024-11-21 03:18:37.365960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:50.009 [2024-11-21 03:18:37.365970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.009 [2024-11-21 03:18:37.366375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.009 [2024-11-21 03:18:37.366400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.009 [2024-11-21 03:18:37.366460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:50.009 [2024-11-21 03:18:37.366483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.009 [2024-11-21 03:18:37.366579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:50.010 [2024-11-21 03:18:37.366591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.010 [2024-11-21 03:18:37.366857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:50.010 [2024-11-21 03:18:37.366978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:50.010 [2024-11-21 03:18:37.366986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:50.010 [2024-11-21 03:18:37.367113] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.010 pt3 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.010 "name": "raid_bdev1", 00:09:50.010 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:50.010 "strip_size_kb": 64, 00:09:50.010 "state": "online", 00:09:50.010 "raid_level": "concat", 00:09:50.010 "superblock": true, 00:09:50.010 "num_base_bdevs": 3, 00:09:50.010 "num_base_bdevs_discovered": 3, 00:09:50.010 "num_base_bdevs_operational": 3, 00:09:50.010 "base_bdevs_list": [ 00:09:50.010 { 00:09:50.010 "name": "pt1", 00:09:50.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.010 "is_configured": true, 00:09:50.010 "data_offset": 2048, 00:09:50.010 "data_size": 63488 00:09:50.010 }, 00:09:50.010 { 00:09:50.010 "name": "pt2", 00:09:50.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.010 "is_configured": true, 00:09:50.010 "data_offset": 2048, 00:09:50.010 "data_size": 63488 00:09:50.010 }, 00:09:50.010 { 00:09:50.010 "name": "pt3", 00:09:50.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.010 "is_configured": true, 00:09:50.010 "data_offset": 2048, 00:09:50.010 "data_size": 63488 00:09:50.010 } 00:09:50.010 ] 00:09:50.010 }' 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.010 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.269 03:18:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.269 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.528 [2024-11-21 03:18:37.838404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.528 "name": "raid_bdev1", 00:09:50.528 "aliases": [ 00:09:50.528 "4429fbe6-7019-4ce3-a871-ab01951b510b" 00:09:50.528 ], 00:09:50.528 "product_name": "Raid Volume", 00:09:50.528 "block_size": 512, 00:09:50.528 "num_blocks": 190464, 00:09:50.528 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:50.528 "assigned_rate_limits": { 00:09:50.528 "rw_ios_per_sec": 0, 00:09:50.528 "rw_mbytes_per_sec": 0, 00:09:50.528 "r_mbytes_per_sec": 0, 00:09:50.528 "w_mbytes_per_sec": 0 00:09:50.528 }, 00:09:50.528 "claimed": false, 00:09:50.528 "zoned": false, 00:09:50.528 "supported_io_types": { 00:09:50.528 "read": true, 00:09:50.528 "write": true, 00:09:50.528 "unmap": true, 00:09:50.528 "flush": true, 00:09:50.528 "reset": true, 00:09:50.528 "nvme_admin": false, 00:09:50.528 "nvme_io": false, 00:09:50.528 "nvme_io_md": false, 00:09:50.528 "write_zeroes": true, 00:09:50.528 "zcopy": false, 00:09:50.528 "get_zone_info": false, 00:09:50.528 "zone_management": false, 00:09:50.528 "zone_append": false, 00:09:50.528 "compare": false, 00:09:50.528 "compare_and_write": false, 00:09:50.528 "abort": false, 00:09:50.528 "seek_hole": false, 00:09:50.528 
"seek_data": false, 00:09:50.528 "copy": false, 00:09:50.528 "nvme_iov_md": false 00:09:50.528 }, 00:09:50.528 "memory_domains": [ 00:09:50.528 { 00:09:50.528 "dma_device_id": "system", 00:09:50.528 "dma_device_type": 1 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.528 "dma_device_type": 2 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "system", 00:09:50.528 "dma_device_type": 1 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.528 "dma_device_type": 2 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "system", 00:09:50.528 "dma_device_type": 1 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.528 "dma_device_type": 2 00:09:50.528 } 00:09:50.528 ], 00:09:50.528 "driver_specific": { 00:09:50.528 "raid": { 00:09:50.528 "uuid": "4429fbe6-7019-4ce3-a871-ab01951b510b", 00:09:50.528 "strip_size_kb": 64, 00:09:50.528 "state": "online", 00:09:50.528 "raid_level": "concat", 00:09:50.528 "superblock": true, 00:09:50.528 "num_base_bdevs": 3, 00:09:50.528 "num_base_bdevs_discovered": 3, 00:09:50.528 "num_base_bdevs_operational": 3, 00:09:50.528 "base_bdevs_list": [ 00:09:50.528 { 00:09:50.528 "name": "pt1", 00:09:50.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.528 "is_configured": true, 00:09:50.528 "data_offset": 2048, 00:09:50.528 "data_size": 63488 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "name": "pt2", 00:09:50.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.528 "is_configured": true, 00:09:50.528 "data_offset": 2048, 00:09:50.528 "data_size": 63488 00:09:50.528 }, 00:09:50.528 { 00:09:50.528 "name": "pt3", 00:09:50.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.528 "is_configured": true, 00:09:50.528 "data_offset": 2048, 00:09:50.528 "data_size": 63488 00:09:50.528 } 00:09:50.528 ] 00:09:50.528 } 00:09:50.528 } 00:09:50.528 }' 00:09:50.528 03:18:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.528 pt2 00:09:50.528 pt3' 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.528 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.529 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.529 03:18:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.529 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.788 [2024-11-21 03:18:38.134417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
4429fbe6-7019-4ce3-a871-ab01951b510b '!=' 4429fbe6-7019-4ce3-a871-ab01951b510b ']' 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79980 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79980 ']' 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79980 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79980 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.788 killing process with pid 79980 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79980' 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79980 00:09:50.788 [2024-11-21 03:18:38.204998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.788 [2024-11-21 03:18:38.205130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.788 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79980 00:09:50.788 [2024-11-21 03:18:38.205208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:09:50.788 [2024-11-21 03:18:38.205223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:50.788 [2024-11-21 03:18:38.266263] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.047 ************************************ 00:09:51.047 END TEST raid_superblock_test 00:09:51.047 ************************************ 00:09:51.047 03:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.047 00:09:51.047 real 0m4.254s 00:09:51.047 user 0m6.558s 00:09:51.047 sys 0m0.993s 00:09:51.047 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.047 03:18:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.306 03:18:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:51.306 03:18:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.306 03:18:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.306 03:18:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.306 ************************************ 00:09:51.306 START TEST raid_read_error_test 00:09:51.306 ************************************ 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.306 03:18:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6gpIeC9eCw 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80222 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80222 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 80222 ']' 00:09:51.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.306 03:18:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.306 [2024-11-21 03:18:38.786369] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:51.306 [2024-11-21 03:18:38.786648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80222 ] 00:09:51.572 [2024-11-21 03:18:38.929386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:51.572 [2024-11-21 03:18:38.969144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.572 [2024-11-21 03:18:39.011793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.572 [2024-11-21 03:18:39.089669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.572 [2024-11-21 03:18:39.089827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.148 BaseBdev1_malloc 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.148 true 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.148 [2024-11-21 03:18:39.677117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.148 [2024-11-21 03:18:39.677247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.148 [2024-11-21 03:18:39.677287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.148 [2024-11-21 03:18:39.677325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.148 [2024-11-21 03:18:39.679822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.148 [2024-11-21 03:18:39.679903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.148 BaseBdev1 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.148 BaseBdev2_malloc 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.148 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.407 true 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.407 [2024-11-21 03:18:39.719789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.407 [2024-11-21 03:18:39.719923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.407 [2024-11-21 03:18:39.719957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.407 [2024-11-21 03:18:39.719989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.407 [2024-11-21 03:18:39.722383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.407 [2024-11-21 03:18:39.722452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.407 BaseBdev2 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.407 BaseBdev3_malloc 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.407 
03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.407 true 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.407 [2024-11-21 03:18:39.766527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.407 [2024-11-21 03:18:39.766661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.407 [2024-11-21 03:18:39.766696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.407 [2024-11-21 03:18:39.766728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.407 [2024-11-21 03:18:39.769152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.407 [2024-11-21 03:18:39.769225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.407 BaseBdev3 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.407 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.408 [2024-11-21 03:18:39.778557] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.408 [2024-11-21 03:18:39.780723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.408 [2024-11-21 03:18:39.780860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.408 [2024-11-21 03:18:39.781084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.408 [2024-11-21 03:18:39.781098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:52.408 [2024-11-21 03:18:39.781368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:52.408 [2024-11-21 03:18:39.781516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.408 [2024-11-21 03:18:39.781530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:52.408 [2024-11-21 03:18:39.781655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.408 "name": "raid_bdev1", 00:09:52.408 "uuid": "8961cbfe-f366-4669-a98d-88ab862038d1", 00:09:52.408 "strip_size_kb": 64, 00:09:52.408 "state": "online", 00:09:52.408 "raid_level": "concat", 00:09:52.408 "superblock": true, 00:09:52.408 "num_base_bdevs": 3, 00:09:52.408 "num_base_bdevs_discovered": 3, 00:09:52.408 "num_base_bdevs_operational": 3, 00:09:52.408 "base_bdevs_list": [ 00:09:52.408 { 00:09:52.408 "name": "BaseBdev1", 00:09:52.408 "uuid": "8a44799e-841b-5b0e-9e33-82e06e74c2f9", 00:09:52.408 "is_configured": true, 00:09:52.408 "data_offset": 2048, 00:09:52.408 "data_size": 63488 00:09:52.408 }, 00:09:52.408 { 00:09:52.408 "name": "BaseBdev2", 00:09:52.408 "uuid": "a5b237b4-72f0-5c3a-ba43-8e6c75d13d07", 00:09:52.408 "is_configured": true, 00:09:52.408 "data_offset": 2048, 00:09:52.408 "data_size": 63488 00:09:52.408 }, 00:09:52.408 { 00:09:52.408 "name": "BaseBdev3", 00:09:52.408 "uuid": "5ef90046-16e3-5498-83ca-abb960900a03", 00:09:52.408 "is_configured": true, 00:09:52.408 "data_offset": 
2048, 00:09:52.408 "data_size": 63488 00:09:52.408 } 00:09:52.408 ] 00:09:52.408 }' 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.408 03:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.666 03:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.667 03:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.924 [2024-11-21 03:18:40.299271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.861 "name": "raid_bdev1", 00:09:53.861 "uuid": "8961cbfe-f366-4669-a98d-88ab862038d1", 00:09:53.861 "strip_size_kb": 64, 00:09:53.861 "state": "online", 00:09:53.861 "raid_level": "concat", 00:09:53.861 "superblock": true, 00:09:53.861 "num_base_bdevs": 3, 00:09:53.861 "num_base_bdevs_discovered": 3, 00:09:53.861 "num_base_bdevs_operational": 3, 00:09:53.861 "base_bdevs_list": [ 00:09:53.861 { 00:09:53.861 "name": "BaseBdev1", 00:09:53.861 "uuid": "8a44799e-841b-5b0e-9e33-82e06e74c2f9", 00:09:53.861 "is_configured": true, 00:09:53.861 "data_offset": 2048, 00:09:53.861 "data_size": 63488 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "name": "BaseBdev2", 00:09:53.861 "uuid": "a5b237b4-72f0-5c3a-ba43-8e6c75d13d07", 00:09:53.861 "is_configured": true, 00:09:53.861 "data_offset": 2048, 
00:09:53.861 "data_size": 63488 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "name": "BaseBdev3", 00:09:53.861 "uuid": "5ef90046-16e3-5498-83ca-abb960900a03", 00:09:53.861 "is_configured": true, 00:09:53.861 "data_offset": 2048, 00:09:53.861 "data_size": 63488 00:09:53.861 } 00:09:53.861 ] 00:09:53.861 }' 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.861 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.121 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.121 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.121 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.380 [2024-11-21 03:18:41.686758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.380 [2024-11-21 03:18:41.686885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.380 [2024-11-21 03:18:41.689786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.380 [2024-11-21 03:18:41.689877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.380 [2024-11-21 03:18:41.689939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.380 [2024-11-21 03:18:41.689984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80222 00:09:54.380 { 00:09:54.380 "results": [ 00:09:54.380 { 00:09:54.380 "job": "raid_bdev1", 00:09:54.380 "core_mask": "0x1", 00:09:54.380 "workload": "randrw", 00:09:54.380 "percentage": 
50, 00:09:54.380 "status": "finished", 00:09:54.380 "queue_depth": 1, 00:09:54.380 "io_size": 131072, 00:09:54.380 "runtime": 1.385291, 00:09:54.380 "iops": 14098.120900229627, 00:09:54.380 "mibps": 1762.2651125287034, 00:09:54.380 "io_failed": 1, 00:09:54.380 "io_timeout": 0, 00:09:54.380 "avg_latency_us": 99.62019931322217, 00:09:54.380 "min_latency_us": 25.994944652662774, 00:09:54.380 "max_latency_us": 1413.7679769894535 00:09:54.380 } 00:09:54.380 ], 00:09:54.380 "core_count": 1 00:09:54.380 } 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 80222 ']' 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 80222 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80222 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.380 killing process with pid 80222 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80222' 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 80222 00:09:54.380 03:18:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 80222 00:09:54.380 [2024-11-21 03:18:41.733874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.380 [2024-11-21 03:18:41.782729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6gpIeC9eCw 00:09:54.640 
03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:54.640 ************************************ 00:09:54.640 END TEST raid_read_error_test 00:09:54.640 ************************************ 00:09:54.640 00:09:54.640 real 0m3.450s 00:09:54.640 user 0m4.232s 00:09:54.640 sys 0m0.666s 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.640 03:18:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.640 03:18:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:54.640 03:18:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.640 03:18:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.640 03:18:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.640 ************************************ 00:09:54.640 START TEST raid_write_error_test 00:09:54.640 ************************************ 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:54.640 
03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.640 03:18:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mUHQovvRci 00:09:54.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80357 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80357 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 80357 ']' 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.640 03:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.899 [2024-11-21 03:18:42.290166] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:09:54.899 [2024-11-21 03:18:42.290793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80357 ] 00:09:54.899 [2024-11-21 03:18:42.429577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:54.899 [2024-11-21 03:18:42.454418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.158 [2024-11-21 03:18:42.496592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.158 [2024-11-21 03:18:42.573417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.158 [2024-11-21 03:18:42.573487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.726 BaseBdev1_malloc 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.726 true 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.726 [2024-11-21 03:18:43.213643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.726 [2024-11-21 03:18:43.213803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.726 [2024-11-21 03:18:43.213828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.726 [2024-11-21 03:18:43.213843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.726 [2024-11-21 03:18:43.216342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.726 [2024-11-21 03:18:43.216382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.726 BaseBdev1 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.726 BaseBdev2_malloc 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.726 true 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.726 [2024-11-21 03:18:43.260293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.726 [2024-11-21 03:18:43.260428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.726 [2024-11-21 03:18:43.260450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.726 [2024-11-21 03:18:43.260462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.726 [2024-11-21 03:18:43.262892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.726 [2024-11-21 03:18:43.262929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.726 BaseBdev2 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.726 03:18:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.726 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.985 BaseBdev3_malloc 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.985 true 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.985 [2024-11-21 03:18:43.306929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.985 [2024-11-21 03:18:43.306989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.985 [2024-11-21 03:18:43.307007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:55.985 [2024-11-21 03:18:43.307032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.985 [2024-11-21 03:18:43.309566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.985 [2024-11-21 03:18:43.309605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:55.985 BaseBdev3 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.985 [2024-11-21 03:18:43.318980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.985 [2024-11-21 03:18:43.321209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.985 [2024-11-21 03:18:43.321279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.985 [2024-11-21 03:18:43.321459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.985 [2024-11-21 03:18:43.321471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.985 [2024-11-21 03:18:43.321735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:55.985 [2024-11-21 03:18:43.321864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.985 [2024-11-21 03:18:43.321881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:55.985 [2024-11-21 03:18:43.321989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.985 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.986 "name": "raid_bdev1", 00:09:55.986 "uuid": "ab420966-b3be-4abd-a63f-b1308fc8e27c", 00:09:55.986 "strip_size_kb": 64, 00:09:55.986 "state": "online", 00:09:55.986 "raid_level": "concat", 00:09:55.986 "superblock": true, 00:09:55.986 "num_base_bdevs": 3, 00:09:55.986 "num_base_bdevs_discovered": 3, 00:09:55.986 "num_base_bdevs_operational": 3, 00:09:55.986 "base_bdevs_list": [ 00:09:55.986 { 00:09:55.986 "name": "BaseBdev1", 00:09:55.986 "uuid": "7adcccbf-95fa-56f7-8d8c-8fae0b39a04a", 00:09:55.986 "is_configured": true, 00:09:55.986 "data_offset": 2048, 
00:09:55.986 "data_size": 63488 00:09:55.986 }, 00:09:55.986 { 00:09:55.986 "name": "BaseBdev2", 00:09:55.986 "uuid": "43b888ab-00ae-5b65-9f9d-c539ea7c58b4", 00:09:55.986 "is_configured": true, 00:09:55.986 "data_offset": 2048, 00:09:55.986 "data_size": 63488 00:09:55.986 }, 00:09:55.986 { 00:09:55.986 "name": "BaseBdev3", 00:09:55.986 "uuid": "f24058cf-cd12-568e-ad3b-371482c08122", 00:09:55.986 "is_configured": true, 00:09:55.986 "data_offset": 2048, 00:09:55.986 "data_size": 63488 00:09:55.986 } 00:09:55.986 ] 00:09:55.986 }' 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.986 03:18:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.244 03:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.503 [2024-11-21 03:18:43.883681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.443 "name": "raid_bdev1", 00:09:57.443 "uuid": "ab420966-b3be-4abd-a63f-b1308fc8e27c", 00:09:57.443 "strip_size_kb": 64, 00:09:57.443 "state": "online", 00:09:57.443 "raid_level": "concat", 00:09:57.443 "superblock": true, 00:09:57.443 "num_base_bdevs": 3, 00:09:57.443 "num_base_bdevs_discovered": 3, 
00:09:57.443 "num_base_bdevs_operational": 3, 00:09:57.443 "base_bdevs_list": [ 00:09:57.443 { 00:09:57.443 "name": "BaseBdev1", 00:09:57.443 "uuid": "7adcccbf-95fa-56f7-8d8c-8fae0b39a04a", 00:09:57.443 "is_configured": true, 00:09:57.443 "data_offset": 2048, 00:09:57.443 "data_size": 63488 00:09:57.443 }, 00:09:57.443 { 00:09:57.443 "name": "BaseBdev2", 00:09:57.443 "uuid": "43b888ab-00ae-5b65-9f9d-c539ea7c58b4", 00:09:57.443 "is_configured": true, 00:09:57.443 "data_offset": 2048, 00:09:57.443 "data_size": 63488 00:09:57.443 }, 00:09:57.443 { 00:09:57.443 "name": "BaseBdev3", 00:09:57.443 "uuid": "f24058cf-cd12-568e-ad3b-371482c08122", 00:09:57.443 "is_configured": true, 00:09:57.443 "data_offset": 2048, 00:09:57.443 "data_size": 63488 00:09:57.443 } 00:09:57.443 ] 00:09:57.443 }' 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.443 03:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.041 [2024-11-21 03:18:45.284102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.041 [2024-11-21 03:18:45.284233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.041 [2024-11-21 03:18:45.286874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.041 [2024-11-21 03:18:45.286990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.041 [2024-11-21 03:18:45.287065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.041 [2024-11-21 03:18:45.287130] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:58.041 { 00:09:58.041 "results": [ 00:09:58.041 { 00:09:58.041 "job": "raid_bdev1", 00:09:58.041 "core_mask": "0x1", 00:09:58.041 "workload": "randrw", 00:09:58.041 "percentage": 50, 00:09:58.041 "status": "finished", 00:09:58.041 "queue_depth": 1, 00:09:58.041 "io_size": 131072, 00:09:58.041 "runtime": 1.398181, 00:09:58.041 "iops": 14442.33614961153, 00:09:58.041 "mibps": 1805.2920187014413, 00:09:58.041 "io_failed": 1, 00:09:58.041 "io_timeout": 0, 00:09:58.041 "avg_latency_us": 97.17488431593824, 00:09:58.041 "min_latency_us": 26.329643510851565, 00:09:58.041 "max_latency_us": 1435.188703913536 00:09:58.041 } 00:09:58.041 ], 00:09:58.041 "core_count": 1 00:09:58.041 } 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80357 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 80357 ']' 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 80357 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80357 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80357' 00:09:58.041 killing process with pid 80357 00:09:58.041 03:18:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 80357 00:09:58.041 [2024-11-21 03:18:45.340123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.041 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 80357 00:09:58.041 [2024-11-21 03:18:45.388376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mUHQovvRci 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:58.312 ************************************ 00:09:58.312 END TEST raid_write_error_test 00:09:58.312 ************************************ 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:58.312 00:09:58.312 real 0m3.550s 00:09:58.312 user 0m4.452s 00:09:58.312 sys 0m0.647s 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.312 03:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.312 03:18:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:58.312 03:18:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:58.312 03:18:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.312 03:18:45 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.312 03:18:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.312 ************************************ 00:09:58.312 START TEST raid_state_function_test 00:09:58.312 ************************************ 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:58.312 Process raid pid: 80484 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80484 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80484' 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80484 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80484 ']' 00:09:58.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.312 03:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.571 [2024-11-21 03:18:45.905604] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:09:58.571 [2024-11-21 03:18:45.905758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.571 [2024-11-21 03:18:46.047945] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:58.571 [2024-11-21 03:18:46.073340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.571 [2024-11-21 03:18:46.114034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.829 [2024-11-21 03:18:46.191676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.829 [2024-11-21 03:18:46.191828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.397 [2024-11-21 03:18:46.752311] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.397 [2024-11-21 03:18:46.752378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.397 [2024-11-21 03:18:46.752392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.397 [2024-11-21 03:18:46.752400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.397 [2024-11-21 03:18:46.752412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.397 [2024-11-21 03:18:46.752420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.397 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.398 "name": "Existed_Raid", 00:09:59.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.398 "strip_size_kb": 0, 00:09:59.398 "state": "configuring", 00:09:59.398 "raid_level": "raid1", 00:09:59.398 
"superblock": false, 00:09:59.398 "num_base_bdevs": 3, 00:09:59.398 "num_base_bdevs_discovered": 0, 00:09:59.398 "num_base_bdevs_operational": 3, 00:09:59.398 "base_bdevs_list": [ 00:09:59.398 { 00:09:59.398 "name": "BaseBdev1", 00:09:59.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.398 "is_configured": false, 00:09:59.398 "data_offset": 0, 00:09:59.398 "data_size": 0 00:09:59.398 }, 00:09:59.398 { 00:09:59.398 "name": "BaseBdev2", 00:09:59.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.398 "is_configured": false, 00:09:59.398 "data_offset": 0, 00:09:59.398 "data_size": 0 00:09:59.398 }, 00:09:59.398 { 00:09:59.398 "name": "BaseBdev3", 00:09:59.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.398 "is_configured": false, 00:09:59.398 "data_offset": 0, 00:09:59.398 "data_size": 0 00:09:59.398 } 00:09:59.398 ] 00:09:59.398 }' 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.398 03:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.657 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.657 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.657 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.657 [2024-11-21 03:18:47.220357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.657 [2024-11-21 03:18:47.220485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.916 03:18:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.916 [2024-11-21 03:18:47.232367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.916 [2024-11-21 03:18:47.232457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.916 [2024-11-21 03:18:47.232490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.916 [2024-11-21 03:18:47.232512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.916 [2024-11-21 03:18:47.232535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.916 [2024-11-21 03:18:47.232555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.916 [2024-11-21 03:18:47.259565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.916 BaseBdev1 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.916 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.916 [ 00:09:59.916 { 00:09:59.916 "name": "BaseBdev1", 00:09:59.916 "aliases": [ 00:09:59.916 "74bd3f45-d386-420e-85e1-d634439a90b8" 00:09:59.916 ], 00:09:59.916 "product_name": "Malloc disk", 00:09:59.916 "block_size": 512, 00:09:59.916 "num_blocks": 65536, 00:09:59.916 "uuid": "74bd3f45-d386-420e-85e1-d634439a90b8", 00:09:59.916 "assigned_rate_limits": { 00:09:59.916 "rw_ios_per_sec": 0, 00:09:59.916 "rw_mbytes_per_sec": 0, 00:09:59.916 "r_mbytes_per_sec": 0, 00:09:59.916 "w_mbytes_per_sec": 0 00:09:59.916 }, 00:09:59.916 "claimed": true, 00:09:59.916 "claim_type": "exclusive_write", 00:09:59.916 "zoned": false, 00:09:59.916 "supported_io_types": { 00:09:59.916 "read": true, 00:09:59.916 "write": true, 00:09:59.916 "unmap": true, 00:09:59.916 "flush": true, 00:09:59.916 "reset": true, 00:09:59.916 
"nvme_admin": false, 00:09:59.916 "nvme_io": false, 00:09:59.917 "nvme_io_md": false, 00:09:59.917 "write_zeroes": true, 00:09:59.917 "zcopy": true, 00:09:59.917 "get_zone_info": false, 00:09:59.917 "zone_management": false, 00:09:59.917 "zone_append": false, 00:09:59.917 "compare": false, 00:09:59.917 "compare_and_write": false, 00:09:59.917 "abort": true, 00:09:59.917 "seek_hole": false, 00:09:59.917 "seek_data": false, 00:09:59.917 "copy": true, 00:09:59.917 "nvme_iov_md": false 00:09:59.917 }, 00:09:59.917 "memory_domains": [ 00:09:59.917 { 00:09:59.917 "dma_device_id": "system", 00:09:59.917 "dma_device_type": 1 00:09:59.917 }, 00:09:59.917 { 00:09:59.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.917 "dma_device_type": 2 00:09:59.917 } 00:09:59.917 ], 00:09:59.917 "driver_specific": {} 00:09:59.917 } 00:09:59.917 ] 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.917 "name": "Existed_Raid", 00:09:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.917 "strip_size_kb": 0, 00:09:59.917 "state": "configuring", 00:09:59.917 "raid_level": "raid1", 00:09:59.917 "superblock": false, 00:09:59.917 "num_base_bdevs": 3, 00:09:59.917 "num_base_bdevs_discovered": 1, 00:09:59.917 "num_base_bdevs_operational": 3, 00:09:59.917 "base_bdevs_list": [ 00:09:59.917 { 00:09:59.917 "name": "BaseBdev1", 00:09:59.917 "uuid": "74bd3f45-d386-420e-85e1-d634439a90b8", 00:09:59.917 "is_configured": true, 00:09:59.917 "data_offset": 0, 00:09:59.917 "data_size": 65536 00:09:59.917 }, 00:09:59.917 { 00:09:59.917 "name": "BaseBdev2", 00:09:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.917 "is_configured": false, 00:09:59.917 "data_offset": 0, 00:09:59.917 "data_size": 0 00:09:59.917 }, 00:09:59.917 { 00:09:59.917 "name": "BaseBdev3", 00:09:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.917 "is_configured": false, 00:09:59.917 "data_offset": 0, 00:09:59.917 "data_size": 0 00:09:59.917 } 00:09:59.917 ] 00:09:59.917 }' 
00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.917 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.483 [2024-11-21 03:18:47.783789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.483 [2024-11-21 03:18:47.783884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.483 [2024-11-21 03:18:47.791789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.483 [2024-11-21 03:18:47.793955] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.483 [2024-11-21 03:18:47.793993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.483 [2024-11-21 03:18:47.794007] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.483 [2024-11-21 03:18:47.794032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.483 03:18:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.483 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.483 "name": "Existed_Raid", 00:10:00.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.483 "strip_size_kb": 0, 00:10:00.483 "state": "configuring", 00:10:00.483 "raid_level": "raid1", 00:10:00.483 "superblock": false, 00:10:00.483 "num_base_bdevs": 3, 00:10:00.483 "num_base_bdevs_discovered": 1, 00:10:00.483 "num_base_bdevs_operational": 3, 00:10:00.483 "base_bdevs_list": [ 00:10:00.483 { 00:10:00.483 "name": "BaseBdev1", 00:10:00.484 "uuid": "74bd3f45-d386-420e-85e1-d634439a90b8", 00:10:00.484 "is_configured": true, 00:10:00.484 "data_offset": 0, 00:10:00.484 "data_size": 65536 00:10:00.484 }, 00:10:00.484 { 00:10:00.484 "name": "BaseBdev2", 00:10:00.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.484 "is_configured": false, 00:10:00.484 "data_offset": 0, 00:10:00.484 "data_size": 0 00:10:00.484 }, 00:10:00.484 { 00:10:00.484 "name": "BaseBdev3", 00:10:00.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.484 "is_configured": false, 00:10:00.484 "data_offset": 0, 00:10:00.484 "data_size": 0 00:10:00.484 } 00:10:00.484 ] 00:10:00.484 }' 00:10:00.484 03:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.484 03:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.743 [2024-11-21 03:18:48.216925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.743 BaseBdev2 00:10:00.743 03:18:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.743 [ 00:10:00.743 { 00:10:00.743 "name": "BaseBdev2", 00:10:00.743 "aliases": [ 00:10:00.743 "da73d0b3-ecef-4c8f-afdc-50f686f7504a" 00:10:00.743 ], 00:10:00.743 "product_name": "Malloc disk", 00:10:00.743 "block_size": 512, 00:10:00.743 "num_blocks": 65536, 00:10:00.743 "uuid": "da73d0b3-ecef-4c8f-afdc-50f686f7504a", 00:10:00.743 "assigned_rate_limits": { 00:10:00.743 "rw_ios_per_sec": 0, 00:10:00.743 "rw_mbytes_per_sec": 0, 00:10:00.743 
"r_mbytes_per_sec": 0, 00:10:00.743 "w_mbytes_per_sec": 0 00:10:00.743 }, 00:10:00.743 "claimed": true, 00:10:00.743 "claim_type": "exclusive_write", 00:10:00.743 "zoned": false, 00:10:00.743 "supported_io_types": { 00:10:00.743 "read": true, 00:10:00.743 "write": true, 00:10:00.743 "unmap": true, 00:10:00.743 "flush": true, 00:10:00.743 "reset": true, 00:10:00.743 "nvme_admin": false, 00:10:00.743 "nvme_io": false, 00:10:00.743 "nvme_io_md": false, 00:10:00.743 "write_zeroes": true, 00:10:00.743 "zcopy": true, 00:10:00.743 "get_zone_info": false, 00:10:00.743 "zone_management": false, 00:10:00.743 "zone_append": false, 00:10:00.743 "compare": false, 00:10:00.743 "compare_and_write": false, 00:10:00.743 "abort": true, 00:10:00.743 "seek_hole": false, 00:10:00.743 "seek_data": false, 00:10:00.743 "copy": true, 00:10:00.743 "nvme_iov_md": false 00:10:00.743 }, 00:10:00.743 "memory_domains": [ 00:10:00.743 { 00:10:00.743 "dma_device_id": "system", 00:10:00.743 "dma_device_type": 1 00:10:00.743 }, 00:10:00.743 { 00:10:00.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.743 "dma_device_type": 2 00:10:00.743 } 00:10:00.743 ], 00:10:00.743 "driver_specific": {} 00:10:00.743 } 00:10:00.743 ] 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.743 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.002 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.002 "name": "Existed_Raid", 00:10:01.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.002 "strip_size_kb": 0, 00:10:01.002 "state": "configuring", 00:10:01.002 "raid_level": "raid1", 00:10:01.002 "superblock": false, 00:10:01.002 "num_base_bdevs": 3, 00:10:01.002 "num_base_bdevs_discovered": 2, 00:10:01.002 "num_base_bdevs_operational": 3, 00:10:01.002 "base_bdevs_list": [ 00:10:01.002 { 00:10:01.002 "name": "BaseBdev1", 00:10:01.002 "uuid": "74bd3f45-d386-420e-85e1-d634439a90b8", 00:10:01.002 
"is_configured": true, 00:10:01.002 "data_offset": 0, 00:10:01.002 "data_size": 65536 00:10:01.002 }, 00:10:01.002 { 00:10:01.002 "name": "BaseBdev2", 00:10:01.002 "uuid": "da73d0b3-ecef-4c8f-afdc-50f686f7504a", 00:10:01.002 "is_configured": true, 00:10:01.002 "data_offset": 0, 00:10:01.002 "data_size": 65536 00:10:01.002 }, 00:10:01.002 { 00:10:01.002 "name": "BaseBdev3", 00:10:01.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.002 "is_configured": false, 00:10:01.002 "data_offset": 0, 00:10:01.002 "data_size": 0 00:10:01.002 } 00:10:01.002 ] 00:10:01.002 }' 00:10:01.002 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.002 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.261 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.261 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.261 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.261 [2024-11-21 03:18:48.723372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.261 [2024-11-21 03:18:48.723570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:01.261 [2024-11-21 03:18:48.723607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:01.261 [2024-11-21 03:18:48.724063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:01.261 [2024-11-21 03:18:48.724311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:01.261 [2024-11-21 03:18:48.724377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:01.262 [2024-11-21 03:18:48.724694] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:01.262 BaseBdev3 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.262 [ 00:10:01.262 { 00:10:01.262 "name": "BaseBdev3", 00:10:01.262 "aliases": [ 00:10:01.262 "7b91355e-bab0-412a-b315-3c989e84e87f" 00:10:01.262 ], 00:10:01.262 "product_name": "Malloc disk", 00:10:01.262 "block_size": 512, 00:10:01.262 "num_blocks": 65536, 00:10:01.262 "uuid": "7b91355e-bab0-412a-b315-3c989e84e87f", 00:10:01.262 "assigned_rate_limits": { 
00:10:01.262 "rw_ios_per_sec": 0, 00:10:01.262 "rw_mbytes_per_sec": 0, 00:10:01.262 "r_mbytes_per_sec": 0, 00:10:01.262 "w_mbytes_per_sec": 0 00:10:01.262 }, 00:10:01.262 "claimed": true, 00:10:01.262 "claim_type": "exclusive_write", 00:10:01.262 "zoned": false, 00:10:01.262 "supported_io_types": { 00:10:01.262 "read": true, 00:10:01.262 "write": true, 00:10:01.262 "unmap": true, 00:10:01.262 "flush": true, 00:10:01.262 "reset": true, 00:10:01.262 "nvme_admin": false, 00:10:01.262 "nvme_io": false, 00:10:01.262 "nvme_io_md": false, 00:10:01.262 "write_zeroes": true, 00:10:01.262 "zcopy": true, 00:10:01.262 "get_zone_info": false, 00:10:01.262 "zone_management": false, 00:10:01.262 "zone_append": false, 00:10:01.262 "compare": false, 00:10:01.262 "compare_and_write": false, 00:10:01.262 "abort": true, 00:10:01.262 "seek_hole": false, 00:10:01.262 "seek_data": false, 00:10:01.262 "copy": true, 00:10:01.262 "nvme_iov_md": false 00:10:01.262 }, 00:10:01.262 "memory_domains": [ 00:10:01.262 { 00:10:01.262 "dma_device_id": "system", 00:10:01.262 "dma_device_type": 1 00:10:01.262 }, 00:10:01.262 { 00:10:01.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.262 "dma_device_type": 2 00:10:01.262 } 00:10:01.262 ], 00:10:01.262 "driver_specific": {} 00:10:01.262 } 00:10:01.262 ] 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.262 03:18:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.262 "name": "Existed_Raid", 00:10:01.262 "uuid": "14f924f5-5628-4782-9459-b79a3c7ed122", 00:10:01.262 "strip_size_kb": 0, 00:10:01.262 "state": "online", 00:10:01.262 "raid_level": "raid1", 00:10:01.262 "superblock": false, 00:10:01.262 "num_base_bdevs": 3, 00:10:01.262 "num_base_bdevs_discovered": 3, 00:10:01.262 "num_base_bdevs_operational": 3, 00:10:01.262 "base_bdevs_list": [ 00:10:01.262 { 00:10:01.262 "name": "BaseBdev1", 00:10:01.262 
"uuid": "74bd3f45-d386-420e-85e1-d634439a90b8", 00:10:01.262 "is_configured": true, 00:10:01.262 "data_offset": 0, 00:10:01.262 "data_size": 65536 00:10:01.262 }, 00:10:01.262 { 00:10:01.262 "name": "BaseBdev2", 00:10:01.262 "uuid": "da73d0b3-ecef-4c8f-afdc-50f686f7504a", 00:10:01.262 "is_configured": true, 00:10:01.262 "data_offset": 0, 00:10:01.262 "data_size": 65536 00:10:01.262 }, 00:10:01.262 { 00:10:01.262 "name": "BaseBdev3", 00:10:01.262 "uuid": "7b91355e-bab0-412a-b315-3c989e84e87f", 00:10:01.262 "is_configured": true, 00:10:01.262 "data_offset": 0, 00:10:01.262 "data_size": 65536 00:10:01.262 } 00:10:01.262 ] 00:10:01.262 }' 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.262 03:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.831 [2024-11-21 
03:18:49.175911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.831 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.831 "name": "Existed_Raid", 00:10:01.831 "aliases": [ 00:10:01.831 "14f924f5-5628-4782-9459-b79a3c7ed122" 00:10:01.831 ], 00:10:01.831 "product_name": "Raid Volume", 00:10:01.831 "block_size": 512, 00:10:01.831 "num_blocks": 65536, 00:10:01.831 "uuid": "14f924f5-5628-4782-9459-b79a3c7ed122", 00:10:01.832 "assigned_rate_limits": { 00:10:01.832 "rw_ios_per_sec": 0, 00:10:01.832 "rw_mbytes_per_sec": 0, 00:10:01.832 "r_mbytes_per_sec": 0, 00:10:01.832 "w_mbytes_per_sec": 0 00:10:01.832 }, 00:10:01.832 "claimed": false, 00:10:01.832 "zoned": false, 00:10:01.832 "supported_io_types": { 00:10:01.832 "read": true, 00:10:01.832 "write": true, 00:10:01.832 "unmap": false, 00:10:01.832 "flush": false, 00:10:01.832 "reset": true, 00:10:01.832 "nvme_admin": false, 00:10:01.832 "nvme_io": false, 00:10:01.832 "nvme_io_md": false, 00:10:01.832 "write_zeroes": true, 00:10:01.832 "zcopy": false, 00:10:01.832 "get_zone_info": false, 00:10:01.832 "zone_management": false, 00:10:01.832 "zone_append": false, 00:10:01.832 "compare": false, 00:10:01.832 "compare_and_write": false, 00:10:01.832 "abort": false, 00:10:01.832 "seek_hole": false, 00:10:01.832 "seek_data": false, 00:10:01.832 "copy": false, 00:10:01.832 "nvme_iov_md": false 00:10:01.832 }, 00:10:01.832 "memory_domains": [ 00:10:01.832 { 00:10:01.832 "dma_device_id": "system", 00:10:01.832 "dma_device_type": 1 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.832 "dma_device_type": 2 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "dma_device_id": "system", 00:10:01.832 "dma_device_type": 1 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:01.832 "dma_device_type": 2 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "dma_device_id": "system", 00:10:01.832 "dma_device_type": 1 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.832 "dma_device_type": 2 00:10:01.832 } 00:10:01.832 ], 00:10:01.832 "driver_specific": { 00:10:01.832 "raid": { 00:10:01.832 "uuid": "14f924f5-5628-4782-9459-b79a3c7ed122", 00:10:01.832 "strip_size_kb": 0, 00:10:01.832 "state": "online", 00:10:01.832 "raid_level": "raid1", 00:10:01.832 "superblock": false, 00:10:01.832 "num_base_bdevs": 3, 00:10:01.832 "num_base_bdevs_discovered": 3, 00:10:01.832 "num_base_bdevs_operational": 3, 00:10:01.832 "base_bdevs_list": [ 00:10:01.832 { 00:10:01.832 "name": "BaseBdev1", 00:10:01.832 "uuid": "74bd3f45-d386-420e-85e1-d634439a90b8", 00:10:01.832 "is_configured": true, 00:10:01.832 "data_offset": 0, 00:10:01.832 "data_size": 65536 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "name": "BaseBdev2", 00:10:01.832 "uuid": "da73d0b3-ecef-4c8f-afdc-50f686f7504a", 00:10:01.832 "is_configured": true, 00:10:01.832 "data_offset": 0, 00:10:01.832 "data_size": 65536 00:10:01.832 }, 00:10:01.832 { 00:10:01.832 "name": "BaseBdev3", 00:10:01.832 "uuid": "7b91355e-bab0-412a-b315-3c989e84e87f", 00:10:01.832 "is_configured": true, 00:10:01.832 "data_offset": 0, 00:10:01.832 "data_size": 65536 00:10:01.832 } 00:10:01.832 ] 00:10:01.832 } 00:10:01.832 } 00:10:01.832 }' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.832 BaseBdev2 00:10:01.832 BaseBdev3' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.832 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.091 [2024-11-21 03:18:49.455695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:02.091 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:02.091 03:18:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.092 "name": "Existed_Raid", 00:10:02.092 "uuid": "14f924f5-5628-4782-9459-b79a3c7ed122", 00:10:02.092 "strip_size_kb": 0, 00:10:02.092 "state": "online", 00:10:02.092 "raid_level": "raid1", 
00:10:02.092 "superblock": false, 00:10:02.092 "num_base_bdevs": 3, 00:10:02.092 "num_base_bdevs_discovered": 2, 00:10:02.092 "num_base_bdevs_operational": 2, 00:10:02.092 "base_bdevs_list": [ 00:10:02.092 { 00:10:02.092 "name": null, 00:10:02.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.092 "is_configured": false, 00:10:02.092 "data_offset": 0, 00:10:02.092 "data_size": 65536 00:10:02.092 }, 00:10:02.092 { 00:10:02.092 "name": "BaseBdev2", 00:10:02.092 "uuid": "da73d0b3-ecef-4c8f-afdc-50f686f7504a", 00:10:02.092 "is_configured": true, 00:10:02.092 "data_offset": 0, 00:10:02.092 "data_size": 65536 00:10:02.092 }, 00:10:02.092 { 00:10:02.092 "name": "BaseBdev3", 00:10:02.092 "uuid": "7b91355e-bab0-412a-b315-3c989e84e87f", 00:10:02.092 "is_configured": true, 00:10:02.092 "data_offset": 0, 00:10:02.092 "data_size": 65536 00:10:02.092 } 00:10:02.092 ] 00:10:02.092 }' 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.092 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.351 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.351 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.610 03:18:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.610 [2024-11-21 03:18:49.948388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.610 03:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.610 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.610 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.610 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.610 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.610 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.610 
[2024-11-21 03:18:50.028985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.610 [2024-11-21 03:18:50.029171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.610 [2024-11-21 03:18:50.049889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.610 [2024-11-21 03:18:50.050062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.610 [2024-11-21 03:18:50.050112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:02.610 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 BaseBdev2 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:02.611 [ 00:10:02.611 { 00:10:02.611 "name": "BaseBdev2", 00:10:02.611 "aliases": [ 00:10:02.611 "57c00277-38d3-4a54-9472-34daf7f064e3" 00:10:02.611 ], 00:10:02.611 "product_name": "Malloc disk", 00:10:02.611 "block_size": 512, 00:10:02.611 "num_blocks": 65536, 00:10:02.611 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:02.611 "assigned_rate_limits": { 00:10:02.611 "rw_ios_per_sec": 0, 00:10:02.611 "rw_mbytes_per_sec": 0, 00:10:02.611 "r_mbytes_per_sec": 0, 00:10:02.611 "w_mbytes_per_sec": 0 00:10:02.611 }, 00:10:02.611 "claimed": false, 00:10:02.611 "zoned": false, 00:10:02.611 "supported_io_types": { 00:10:02.611 "read": true, 00:10:02.611 "write": true, 00:10:02.611 "unmap": true, 00:10:02.611 "flush": true, 00:10:02.611 "reset": true, 00:10:02.611 "nvme_admin": false, 00:10:02.611 "nvme_io": false, 00:10:02.611 "nvme_io_md": false, 00:10:02.611 "write_zeroes": true, 00:10:02.611 "zcopy": true, 00:10:02.611 "get_zone_info": false, 00:10:02.611 "zone_management": false, 00:10:02.611 "zone_append": false, 00:10:02.611 "compare": false, 00:10:02.611 "compare_and_write": false, 00:10:02.611 "abort": true, 00:10:02.611 "seek_hole": false, 00:10:02.611 "seek_data": false, 00:10:02.611 "copy": true, 00:10:02.611 "nvme_iov_md": false 00:10:02.611 }, 00:10:02.611 "memory_domains": [ 00:10:02.611 { 00:10:02.611 "dma_device_id": "system", 00:10:02.611 "dma_device_type": 1 00:10:02.611 }, 00:10:02.611 { 00:10:02.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.611 "dma_device_type": 2 00:10:02.611 } 00:10:02.611 ], 00:10:02.611 "driver_specific": {} 00:10:02.611 } 00:10:02.611 ] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.611 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.871 BaseBdev3 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:02.871 [ 00:10:02.871 { 00:10:02.871 "name": "BaseBdev3", 00:10:02.871 "aliases": [ 00:10:02.871 "db604be7-352b-42da-b883-12fe48408054" 00:10:02.871 ], 00:10:02.871 "product_name": "Malloc disk", 00:10:02.871 "block_size": 512, 00:10:02.871 "num_blocks": 65536, 00:10:02.871 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:02.871 "assigned_rate_limits": { 00:10:02.871 "rw_ios_per_sec": 0, 00:10:02.871 "rw_mbytes_per_sec": 0, 00:10:02.871 "r_mbytes_per_sec": 0, 00:10:02.871 "w_mbytes_per_sec": 0 00:10:02.871 }, 00:10:02.871 "claimed": false, 00:10:02.871 "zoned": false, 00:10:02.871 "supported_io_types": { 00:10:02.871 "read": true, 00:10:02.871 "write": true, 00:10:02.871 "unmap": true, 00:10:02.871 "flush": true, 00:10:02.871 "reset": true, 00:10:02.871 "nvme_admin": false, 00:10:02.871 "nvme_io": false, 00:10:02.871 "nvme_io_md": false, 00:10:02.871 "write_zeroes": true, 00:10:02.871 "zcopy": true, 00:10:02.871 "get_zone_info": false, 00:10:02.871 "zone_management": false, 00:10:02.871 "zone_append": false, 00:10:02.871 "compare": false, 00:10:02.871 "compare_and_write": false, 00:10:02.871 "abort": true, 00:10:02.871 "seek_hole": false, 00:10:02.871 "seek_data": false, 00:10:02.871 "copy": true, 00:10:02.871 "nvme_iov_md": false 00:10:02.871 }, 00:10:02.871 "memory_domains": [ 00:10:02.871 { 00:10:02.871 "dma_device_id": "system", 00:10:02.871 "dma_device_type": 1 00:10:02.871 }, 00:10:02.871 { 00:10:02.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.871 "dma_device_type": 2 00:10:02.871 } 00:10:02.871 ], 00:10:02.871 "driver_specific": {} 00:10:02.871 } 00:10:02.871 ] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.871 [2024-11-21 03:18:50.232710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.871 [2024-11-21 03:18:50.232847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.871 [2024-11-21 03:18:50.232891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.871 [2024-11-21 03:18:50.235169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.871 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.871 "name": "Existed_Raid", 00:10:02.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.871 "strip_size_kb": 0, 00:10:02.871 "state": "configuring", 00:10:02.871 "raid_level": "raid1", 00:10:02.871 "superblock": false, 00:10:02.871 "num_base_bdevs": 3, 00:10:02.871 "num_base_bdevs_discovered": 2, 00:10:02.871 "num_base_bdevs_operational": 3, 00:10:02.871 "base_bdevs_list": [ 00:10:02.872 { 00:10:02.872 "name": "BaseBdev1", 00:10:02.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.872 "is_configured": false, 00:10:02.872 "data_offset": 0, 00:10:02.872 "data_size": 0 00:10:02.872 }, 00:10:02.872 { 00:10:02.872 "name": "BaseBdev2", 00:10:02.872 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:02.872 "is_configured": true, 00:10:02.872 "data_offset": 0, 00:10:02.872 "data_size": 65536 00:10:02.872 }, 00:10:02.872 { 00:10:02.872 "name": "BaseBdev3", 00:10:02.872 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:02.872 "is_configured": true, 00:10:02.872 "data_offset": 0, 00:10:02.872 "data_size": 65536 00:10:02.872 } 00:10:02.872 ] 
00:10:02.872 }' 00:10:02.872 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.872 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.440 [2024-11-21 03:18:50.704898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.440 "name": "Existed_Raid", 00:10:03.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.440 "strip_size_kb": 0, 00:10:03.440 "state": "configuring", 00:10:03.440 "raid_level": "raid1", 00:10:03.440 "superblock": false, 00:10:03.440 "num_base_bdevs": 3, 00:10:03.440 "num_base_bdevs_discovered": 1, 00:10:03.440 "num_base_bdevs_operational": 3, 00:10:03.440 "base_bdevs_list": [ 00:10:03.440 { 00:10:03.440 "name": "BaseBdev1", 00:10:03.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.440 "is_configured": false, 00:10:03.440 "data_offset": 0, 00:10:03.440 "data_size": 0 00:10:03.440 }, 00:10:03.440 { 00:10:03.440 "name": null, 00:10:03.440 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:03.440 "is_configured": false, 00:10:03.440 "data_offset": 0, 00:10:03.440 "data_size": 65536 00:10:03.440 }, 00:10:03.440 { 00:10:03.440 "name": "BaseBdev3", 00:10:03.440 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:03.440 "is_configured": true, 00:10:03.440 "data_offset": 0, 00:10:03.440 "data_size": 65536 00:10:03.440 } 00:10:03.440 ] 00:10:03.440 }' 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.440 03:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.721 [2024-11-21 03:18:51.213770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.721 BaseBdev1 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.721 [ 00:10:03.721 { 00:10:03.721 "name": "BaseBdev1", 00:10:03.721 "aliases": [ 00:10:03.721 "c01756db-07f3-4e60-93d2-b736904b1623" 00:10:03.721 ], 00:10:03.721 "product_name": "Malloc disk", 00:10:03.721 "block_size": 512, 00:10:03.721 "num_blocks": 65536, 00:10:03.721 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:03.721 "assigned_rate_limits": { 00:10:03.721 "rw_ios_per_sec": 0, 00:10:03.721 "rw_mbytes_per_sec": 0, 00:10:03.721 "r_mbytes_per_sec": 0, 00:10:03.721 "w_mbytes_per_sec": 0 00:10:03.721 }, 00:10:03.721 "claimed": true, 00:10:03.721 "claim_type": "exclusive_write", 00:10:03.721 "zoned": false, 00:10:03.721 "supported_io_types": { 00:10:03.721 "read": true, 00:10:03.721 "write": true, 00:10:03.721 "unmap": true, 00:10:03.721 "flush": true, 00:10:03.721 "reset": true, 00:10:03.721 "nvme_admin": false, 00:10:03.721 "nvme_io": false, 00:10:03.721 "nvme_io_md": false, 00:10:03.721 "write_zeroes": true, 00:10:03.721 "zcopy": true, 00:10:03.721 "get_zone_info": false, 00:10:03.721 "zone_management": false, 00:10:03.721 "zone_append": false, 00:10:03.721 "compare": false, 00:10:03.721 "compare_and_write": false, 00:10:03.721 "abort": true, 00:10:03.721 "seek_hole": false, 00:10:03.721 "seek_data": false, 00:10:03.721 "copy": true, 00:10:03.721 "nvme_iov_md": false 00:10:03.721 }, 
00:10:03.721 "memory_domains": [ 00:10:03.721 { 00:10:03.721 "dma_device_id": "system", 00:10:03.721 "dma_device_type": 1 00:10:03.721 }, 00:10:03.721 { 00:10:03.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.721 "dma_device_type": 2 00:10:03.721 } 00:10:03.721 ], 00:10:03.721 "driver_specific": {} 00:10:03.721 } 00:10:03.721 ] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.721 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.980 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.980 "name": "Existed_Raid", 00:10:03.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.981 "strip_size_kb": 0, 00:10:03.981 "state": "configuring", 00:10:03.981 "raid_level": "raid1", 00:10:03.981 "superblock": false, 00:10:03.981 "num_base_bdevs": 3, 00:10:03.981 "num_base_bdevs_discovered": 2, 00:10:03.981 "num_base_bdevs_operational": 3, 00:10:03.981 "base_bdevs_list": [ 00:10:03.981 { 00:10:03.981 "name": "BaseBdev1", 00:10:03.981 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:03.981 "is_configured": true, 00:10:03.981 "data_offset": 0, 00:10:03.981 "data_size": 65536 00:10:03.981 }, 00:10:03.981 { 00:10:03.981 "name": null, 00:10:03.981 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:03.981 "is_configured": false, 00:10:03.981 "data_offset": 0, 00:10:03.981 "data_size": 65536 00:10:03.981 }, 00:10:03.981 { 00:10:03.981 "name": "BaseBdev3", 00:10:03.981 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:03.981 "is_configured": true, 00:10:03.981 "data_offset": 0, 00:10:03.981 "data_size": 65536 00:10:03.981 } 00:10:03.981 ] 00:10:03.981 }' 00:10:03.981 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.981 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.240 [2024-11-21 03:18:51.725987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.240 
03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.240 03:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.241 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.241 "name": "Existed_Raid", 00:10:04.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.241 "strip_size_kb": 0, 00:10:04.241 "state": "configuring", 00:10:04.241 "raid_level": "raid1", 00:10:04.241 "superblock": false, 00:10:04.241 "num_base_bdevs": 3, 00:10:04.241 "num_base_bdevs_discovered": 1, 00:10:04.241 "num_base_bdevs_operational": 3, 00:10:04.241 "base_bdevs_list": [ 00:10:04.241 { 00:10:04.241 "name": "BaseBdev1", 00:10:04.241 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:04.241 "is_configured": true, 00:10:04.241 "data_offset": 0, 00:10:04.241 "data_size": 65536 00:10:04.241 }, 00:10:04.241 { 00:10:04.241 "name": null, 00:10:04.241 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:04.241 "is_configured": false, 00:10:04.241 "data_offset": 0, 00:10:04.241 "data_size": 65536 00:10:04.241 }, 00:10:04.241 { 00:10:04.241 "name": null, 00:10:04.241 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:04.241 "is_configured": false, 00:10:04.241 "data_offset": 0, 00:10:04.241 "data_size": 65536 00:10:04.241 } 00:10:04.241 ] 00:10:04.241 }' 00:10:04.241 03:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.241 03:18:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.811 [2024-11-21 03:18:52.210185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.811 03:18:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.811 "name": "Existed_Raid", 00:10:04.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.811 "strip_size_kb": 0, 00:10:04.811 "state": "configuring", 00:10:04.811 "raid_level": "raid1", 00:10:04.811 "superblock": false, 00:10:04.811 "num_base_bdevs": 3, 00:10:04.811 "num_base_bdevs_discovered": 2, 00:10:04.811 "num_base_bdevs_operational": 3, 00:10:04.811 "base_bdevs_list": [ 00:10:04.811 { 00:10:04.811 "name": "BaseBdev1", 00:10:04.811 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:04.811 "is_configured": true, 00:10:04.811 "data_offset": 0, 00:10:04.811 "data_size": 65536 00:10:04.811 }, 00:10:04.811 { 00:10:04.811 "name": null, 00:10:04.811 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:04.811 "is_configured": false, 00:10:04.811 "data_offset": 
0, 00:10:04.811 "data_size": 65536 00:10:04.811 }, 00:10:04.811 { 00:10:04.811 "name": "BaseBdev3", 00:10:04.811 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:04.811 "is_configured": true, 00:10:04.811 "data_offset": 0, 00:10:04.811 "data_size": 65536 00:10:04.811 } 00:10:04.811 ] 00:10:04.811 }' 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.811 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.458 [2024-11-21 03:18:52.678331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.458 "name": "Existed_Raid", 00:10:05.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.458 "strip_size_kb": 0, 00:10:05.458 "state": "configuring", 00:10:05.458 "raid_level": "raid1", 00:10:05.458 "superblock": false, 00:10:05.458 "num_base_bdevs": 3, 00:10:05.458 "num_base_bdevs_discovered": 1, 00:10:05.458 "num_base_bdevs_operational": 3, 00:10:05.458 "base_bdevs_list": [ 
00:10:05.458 { 00:10:05.458 "name": null, 00:10:05.458 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:05.458 "is_configured": false, 00:10:05.458 "data_offset": 0, 00:10:05.458 "data_size": 65536 00:10:05.458 }, 00:10:05.458 { 00:10:05.458 "name": null, 00:10:05.458 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:05.458 "is_configured": false, 00:10:05.458 "data_offset": 0, 00:10:05.458 "data_size": 65536 00:10:05.458 }, 00:10:05.458 { 00:10:05.458 "name": "BaseBdev3", 00:10:05.458 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:05.458 "is_configured": true, 00:10:05.458 "data_offset": 0, 00:10:05.458 "data_size": 65536 00:10:05.458 } 00:10:05.458 ] 00:10:05.458 }' 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.458 03:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.718 [2024-11-21 03:18:53.158380] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:05.718 "name": "Existed_Raid", 00:10:05.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.718 "strip_size_kb": 0, 00:10:05.718 "state": "configuring", 00:10:05.718 "raid_level": "raid1", 00:10:05.718 "superblock": false, 00:10:05.718 "num_base_bdevs": 3, 00:10:05.718 "num_base_bdevs_discovered": 2, 00:10:05.718 "num_base_bdevs_operational": 3, 00:10:05.718 "base_bdevs_list": [ 00:10:05.718 { 00:10:05.718 "name": null, 00:10:05.718 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:05.718 "is_configured": false, 00:10:05.718 "data_offset": 0, 00:10:05.718 "data_size": 65536 00:10:05.718 }, 00:10:05.718 { 00:10:05.718 "name": "BaseBdev2", 00:10:05.718 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:05.718 "is_configured": true, 00:10:05.718 "data_offset": 0, 00:10:05.718 "data_size": 65536 00:10:05.718 }, 00:10:05.718 { 00:10:05.718 "name": "BaseBdev3", 00:10:05.718 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:05.718 "is_configured": true, 00:10:05.718 "data_offset": 0, 00:10:05.718 "data_size": 65536 00:10:05.718 } 00:10:05.718 ] 00:10:05.718 }' 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.718 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c01756db-07f3-4e60-93d2-b736904b1623 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 [2024-11-21 03:18:53.727578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.287 [2024-11-21 03:18:53.727736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:06.287 [2024-11-21 03:18:53.727766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:06.287 [2024-11-21 03:18:53.728078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:10:06.287 [2024-11-21 03:18:53.728274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.287 [2024-11-21 03:18:53.728316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:06.287 [2024-11-21 03:18:53.728573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.287 NewBaseBdev 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.287 
03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.287 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.287 [ 00:10:06.287 { 00:10:06.287 "name": "NewBaseBdev", 00:10:06.287 "aliases": [ 00:10:06.287 "c01756db-07f3-4e60-93d2-b736904b1623" 00:10:06.287 ], 00:10:06.288 "product_name": "Malloc disk", 00:10:06.288 "block_size": 512, 00:10:06.288 "num_blocks": 65536, 00:10:06.288 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:06.288 "assigned_rate_limits": { 00:10:06.288 "rw_ios_per_sec": 0, 00:10:06.288 "rw_mbytes_per_sec": 0, 00:10:06.288 "r_mbytes_per_sec": 0, 00:10:06.288 "w_mbytes_per_sec": 0 00:10:06.288 }, 00:10:06.288 
"claimed": true, 00:10:06.288 "claim_type": "exclusive_write", 00:10:06.288 "zoned": false, 00:10:06.288 "supported_io_types": { 00:10:06.288 "read": true, 00:10:06.288 "write": true, 00:10:06.288 "unmap": true, 00:10:06.288 "flush": true, 00:10:06.288 "reset": true, 00:10:06.288 "nvme_admin": false, 00:10:06.288 "nvme_io": false, 00:10:06.288 "nvme_io_md": false, 00:10:06.288 "write_zeroes": true, 00:10:06.288 "zcopy": true, 00:10:06.288 "get_zone_info": false, 00:10:06.288 "zone_management": false, 00:10:06.288 "zone_append": false, 00:10:06.288 "compare": false, 00:10:06.288 "compare_and_write": false, 00:10:06.288 "abort": true, 00:10:06.288 "seek_hole": false, 00:10:06.288 "seek_data": false, 00:10:06.288 "copy": true, 00:10:06.288 "nvme_iov_md": false 00:10:06.288 }, 00:10:06.288 "memory_domains": [ 00:10:06.288 { 00:10:06.288 "dma_device_id": "system", 00:10:06.288 "dma_device_type": 1 00:10:06.288 }, 00:10:06.288 { 00:10:06.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.288 "dma_device_type": 2 00:10:06.288 } 00:10:06.288 ], 00:10:06.288 "driver_specific": {} 00:10:06.288 } 00:10:06.288 ] 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.288 "name": "Existed_Raid", 00:10:06.288 "uuid": "e73dbdfd-c56e-42f7-9389-9cca5bcb89dd", 00:10:06.288 "strip_size_kb": 0, 00:10:06.288 "state": "online", 00:10:06.288 "raid_level": "raid1", 00:10:06.288 "superblock": false, 00:10:06.288 "num_base_bdevs": 3, 00:10:06.288 "num_base_bdevs_discovered": 3, 00:10:06.288 "num_base_bdevs_operational": 3, 00:10:06.288 "base_bdevs_list": [ 00:10:06.288 { 00:10:06.288 "name": "NewBaseBdev", 00:10:06.288 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623", 00:10:06.288 "is_configured": true, 00:10:06.288 "data_offset": 0, 00:10:06.288 "data_size": 65536 00:10:06.288 }, 00:10:06.288 { 00:10:06.288 "name": "BaseBdev2", 00:10:06.288 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3", 00:10:06.288 "is_configured": true, 00:10:06.288 "data_offset": 0, 00:10:06.288 "data_size": 65536 
00:10:06.288 }, 00:10:06.288 { 00:10:06.288 "name": "BaseBdev3", 00:10:06.288 "uuid": "db604be7-352b-42da-b883-12fe48408054", 00:10:06.288 "is_configured": true, 00:10:06.288 "data_offset": 0, 00:10:06.288 "data_size": 65536 00:10:06.288 } 00:10:06.288 ] 00:10:06.288 }' 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.288 03:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.856 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.856 [2024-11-21 03:18:54.240180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.857 "name": "Existed_Raid", 00:10:06.857 "aliases": [ 
00:10:06.857 "e73dbdfd-c56e-42f7-9389-9cca5bcb89dd" 00:10:06.857 ], 00:10:06.857 "product_name": "Raid Volume", 00:10:06.857 "block_size": 512, 00:10:06.857 "num_blocks": 65536, 00:10:06.857 "uuid": "e73dbdfd-c56e-42f7-9389-9cca5bcb89dd", 00:10:06.857 "assigned_rate_limits": { 00:10:06.857 "rw_ios_per_sec": 0, 00:10:06.857 "rw_mbytes_per_sec": 0, 00:10:06.857 "r_mbytes_per_sec": 0, 00:10:06.857 "w_mbytes_per_sec": 0 00:10:06.857 }, 00:10:06.857 "claimed": false, 00:10:06.857 "zoned": false, 00:10:06.857 "supported_io_types": { 00:10:06.857 "read": true, 00:10:06.857 "write": true, 00:10:06.857 "unmap": false, 00:10:06.857 "flush": false, 00:10:06.857 "reset": true, 00:10:06.857 "nvme_admin": false, 00:10:06.857 "nvme_io": false, 00:10:06.857 "nvme_io_md": false, 00:10:06.857 "write_zeroes": true, 00:10:06.857 "zcopy": false, 00:10:06.857 "get_zone_info": false, 00:10:06.857 "zone_management": false, 00:10:06.857 "zone_append": false, 00:10:06.857 "compare": false, 00:10:06.857 "compare_and_write": false, 00:10:06.857 "abort": false, 00:10:06.857 "seek_hole": false, 00:10:06.857 "seek_data": false, 00:10:06.857 "copy": false, 00:10:06.857 "nvme_iov_md": false 00:10:06.857 }, 00:10:06.857 "memory_domains": [ 00:10:06.857 { 00:10:06.857 "dma_device_id": "system", 00:10:06.857 "dma_device_type": 1 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.857 "dma_device_type": 2 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "system", 00:10:06.857 "dma_device_type": 1 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.857 "dma_device_type": 2 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "system", 00:10:06.857 "dma_device_type": 1 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.857 "dma_device_type": 2 00:10:06.857 } 00:10:06.857 ], 00:10:06.857 "driver_specific": { 00:10:06.857 "raid": { 00:10:06.857 "uuid": 
"e73dbdfd-c56e-42f7-9389-9cca5bcb89dd",
00:10:06.857 "strip_size_kb": 0,
00:10:06.857 "state": "online",
00:10:06.857 "raid_level": "raid1",
00:10:06.857 "superblock": false,
00:10:06.857 "num_base_bdevs": 3,
00:10:06.857 "num_base_bdevs_discovered": 3,
00:10:06.857 "num_base_bdevs_operational": 3,
00:10:06.857 "base_bdevs_list": [
00:10:06.857 {
00:10:06.857 "name": "NewBaseBdev",
00:10:06.857 "uuid": "c01756db-07f3-4e60-93d2-b736904b1623",
00:10:06.857 "is_configured": true,
00:10:06.857 "data_offset": 0,
00:10:06.857 "data_size": 65536
00:10:06.857 },
00:10:06.857 {
00:10:06.857 "name": "BaseBdev2",
00:10:06.857 "uuid": "57c00277-38d3-4a54-9472-34daf7f064e3",
00:10:06.857 "is_configured": true,
00:10:06.857 "data_offset": 0,
00:10:06.857 "data_size": 65536
00:10:06.857 },
00:10:06.857 {
00:10:06.857 "name": "BaseBdev3",
00:10:06.857 "uuid": "db604be7-352b-42da-b883-12fe48408054",
00:10:06.857 "is_configured": true,
00:10:06.857 "data_offset": 0,
00:10:06.857 "data_size": 65536
00:10:06.857 }
00:10:06.857 ]
00:10:06.857 }
00:10:06.857 }
00:10:06.857 }'
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:06.857 BaseBdev2
00:10:06.857 BaseBdev3'
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:06.857 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.116 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.117 [2024-11-21 03:18:54.531845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:07.117 [2024-11-21 03:18:54.531920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:07.117 [2024-11-21 03:18:54.532073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:07.117 [2024-11-21 03:18:54.532405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:07.117 [2024-11-21 03:18:54.532459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80484
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80484 ']'
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80484
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80484
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80484' killing process with pid 80484
00:10:07.117 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80484
00:10:07.117 [2024-11-21 03:18:54.581110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80484
00:10:07.117 [2024-11-21 03:18:54.639943] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:07.686 ************************************
00:10:07.686 END TEST raid_state_function_test
00:10:07.686 ************************************
00:10:07.686 03:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:07.686
00:10:07.686 real 0m9.169s
00:10:07.686 user 0m15.308s
00:10:07.686 sys 0m1.954s
00:10:07.686 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:07.686 03:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:07.686 03:18:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:10:07.686 03:18:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:07.686 03:18:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.686 03:18:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:07.686 ************************************
00:10:07.686 START TEST raid_state_function_test_sb
00:10:07.686 ************************************
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81094
00:10:07.686 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81094'
00:10:07.687 Process raid pid: 81094
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81094
00:10:07.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81094 ']'
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:07.687 03:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.687 [2024-11-21 03:18:55.142557] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization...
00:10:07.687 [2024-11-21 03:18:55.143383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:07.946 [2024-11-21 03:18:55.285302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:07.946 [2024-11-21 03:18:55.325782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:07.946 [2024-11-21 03:18:55.366708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:07.946 [2024-11-21 03:18:55.445184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:07.946 [2024-11-21 03:18:55.445327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.517 [2024-11-21 03:18:56.010888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:08.517 [2024-11-21 03:18:56.011056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:08.517 [2024-11-21 03:18:56.011079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:08.517 [2024-11-21 03:18:56.011089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:08.517 [2024-11-21 03:18:56.011103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:08.517 [2024-11-21 03:18:56.011111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.517 "name": "Existed_Raid",
00:10:08.517 "uuid": "cf27f2c5-48b7-427d-9ca0-961c48b73520",
00:10:08.517 "strip_size_kb": 0,
00:10:08.517 "state": "configuring",
00:10:08.517 "raid_level": "raid1",
00:10:08.517 "superblock": true,
00:10:08.517 "num_base_bdevs": 3,
00:10:08.517 "num_base_bdevs_discovered": 0,
00:10:08.517 "num_base_bdevs_operational": 3,
00:10:08.517 "base_bdevs_list": [
00:10:08.517 {
00:10:08.517 "name": "BaseBdev1",
00:10:08.517 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:08.517 "is_configured": false,
00:10:08.517 "data_offset": 0,
00:10:08.517 "data_size": 0
00:10:08.517 },
00:10:08.517 {
00:10:08.517 "name": "BaseBdev2",
00:10:08.517 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:08.517 "is_configured": false,
00:10:08.517 "data_offset": 0,
00:10:08.517 "data_size": 0
00:10:08.517 },
00:10:08.517 {
00:10:08.517 "name": "BaseBdev3",
00:10:08.517 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:08.517 "is_configured": false,
00:10:08.517 "data_offset": 0,
00:10:08.517 "data_size": 0
00:10:08.517 }
00:10:08.517 ]
00:10:08.517 }'
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.517 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.088 [2024-11-21 03:18:56.450895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:09.088 [2024-11-21 03:18:56.450997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.088 [2024-11-21 03:18:56.458908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:09.088 [2024-11-21 03:18:56.459010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:09.088 [2024-11-21 03:18:56.459064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:09.088 [2024-11-21 03:18:56.459090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:09.088 [2024-11-21 03:18:56.459129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:09.088 [2024-11-21 03:18:56.459159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.088 [2024-11-21 03:18:56.482270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:09.088 BaseBdev1
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.088 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.088 [
00:10:09.088 {
00:10:09.088 "name": "BaseBdev1",
00:10:09.088 "aliases": [
00:10:09.088 "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f"
00:10:09.088 ],
00:10:09.088 "product_name": "Malloc disk",
00:10:09.088 "block_size": 512,
00:10:09.088 "num_blocks": 65536,
00:10:09.088 "uuid": "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f",
00:10:09.088 "assigned_rate_limits": {
00:10:09.088 "rw_ios_per_sec": 0,
00:10:09.088 "rw_mbytes_per_sec": 0,
00:10:09.088 "r_mbytes_per_sec": 0,
00:10:09.088 "w_mbytes_per_sec": 0
00:10:09.088 },
00:10:09.088 "claimed": true,
00:10:09.088 "claim_type": "exclusive_write",
00:10:09.088 "zoned": false,
00:10:09.088 "supported_io_types": {
00:10:09.088 "read": true,
00:10:09.088 "write": true,
00:10:09.088 "unmap": true,
00:10:09.088 "flush": true,
00:10:09.088 "reset": true,
00:10:09.088 "nvme_admin": false,
00:10:09.088 "nvme_io": false,
00:10:09.088 "nvme_io_md": false,
00:10:09.088 "write_zeroes": true,
00:10:09.088 "zcopy": true,
00:10:09.088 "get_zone_info": false,
00:10:09.088 "zone_management": false,
00:10:09.088 "zone_append": false,
00:10:09.088 "compare": false,
00:10:09.088 "compare_and_write": false,
00:10:09.088 "abort": true,
00:10:09.088 "seek_hole": false,
00:10:09.088 "seek_data": false,
00:10:09.088 "copy": true,
00:10:09.088 "nvme_iov_md": false
00:10:09.088 },
00:10:09.088 "memory_domains": [
00:10:09.088 {
00:10:09.089 "dma_device_id": "system",
00:10:09.089 "dma_device_type": 1
00:10:09.089 },
00:10:09.089 {
00:10:09.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:09.089 "dma_device_type": 2
00:10:09.089 }
00:10:09.089 ],
00:10:09.089 "driver_specific": {}
00:10:09.089 }
00:10:09.089 ]
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:09.089 "name": "Existed_Raid",
00:10:09.089 "uuid": "1f3b95e1-31f3-49b5-9423-ba63ece130cc",
00:10:09.089 "strip_size_kb": 0,
00:10:09.089 "state": "configuring",
00:10:09.089 "raid_level": "raid1",
00:10:09.089 "superblock": true,
00:10:09.089 "num_base_bdevs": 3,
00:10:09.089 "num_base_bdevs_discovered": 1,
00:10:09.089 "num_base_bdevs_operational": 3,
00:10:09.089 "base_bdevs_list": [
00:10:09.089 {
00:10:09.089 "name": "BaseBdev1",
00:10:09.089 "uuid": "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f",
00:10:09.089 "is_configured": true,
00:10:09.089 "data_offset": 2048,
00:10:09.089 "data_size": 63488
00:10:09.089 },
00:10:09.089 {
00:10:09.089 "name": "BaseBdev2",
00:10:09.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:09.089 "is_configured": false,
00:10:09.089 "data_offset": 0,
00:10:09.089 "data_size": 0
00:10:09.089 },
00:10:09.089 {
00:10:09.089 "name": "BaseBdev3",
00:10:09.089 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:09.089 "is_configured": false,
00:10:09.089 "data_offset": 0,
00:10:09.089 "data_size": 0
00:10:09.089 }
00:10:09.089 ]
00:10:09.089 }'
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:09.089 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.658 [2024-11-21 03:18:56.930454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:09.658 [2024-11-21 03:18:56.930538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.658 [2024-11-21 03:18:56.942474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:09.658 [2024-11-21 03:18:56.944829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:09.658 [2024-11-21 03:18:56.944873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:09.658 [2024-11-21 03:18:56.944886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:09.658 [2024-11-21 03:18:56.944894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.658 03:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:09.658 "name": "Existed_Raid",
00:10:09.658 "uuid": "163ffb57-da63-41fc-be53-1de68b30b42b",
00:10:09.658 "strip_size_kb": 0,
00:10:09.658 "state": "configuring",
00:10:09.658 "raid_level": "raid1",
00:10:09.658 "superblock": true,
00:10:09.658 "num_base_bdevs": 3,
00:10:09.658 "num_base_bdevs_discovered": 1,
00:10:09.658 "num_base_bdevs_operational": 3,
00:10:09.658 "base_bdevs_list": [
00:10:09.658 {
00:10:09.658 "name": "BaseBdev1",
00:10:09.658 "uuid": "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f",
00:10:09.658 "is_configured": true,
00:10:09.658 "data_offset": 2048,
00:10:09.658 "data_size": 63488
00:10:09.658 },
00:10:09.658 {
00:10:09.658 "name": "BaseBdev2",
00:10:09.658 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:09.658 "is_configured": false,
00:10:09.658 "data_offset": 0,
00:10:09.658 "data_size": 0
00:10:09.658 },
00:10:09.658 {
00:10:09.658 "name": "BaseBdev3",
00:10:09.658 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:09.658 "is_configured": false,
00:10:09.658 "data_offset": 0,
00:10:09.658 "data_size": 0
00:10:09.658 }
00:10:09.658 ]
00:10:09.658 }'
00:10:09.658 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:09.658 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.918 [2024-11-21 03:18:57.435800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:09.918 BaseBdev2
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.918 [
00:10:09.918 {
00:10:09.918 "name": "BaseBdev2",
00:10:09.918 "aliases": [
00:10:09.918 "9cf864fd-e194-4b7f-8163-a967e2665751"
00:10:09.918 ],
00:10:09.918 "product_name": "Malloc disk",
00:10:09.918 "block_size": 512,
00:10:09.918 "num_blocks": 65536,
00:10:09.918 "uuid": "9cf864fd-e194-4b7f-8163-a967e2665751",
00:10:09.918 "assigned_rate_limits": {
00:10:09.918 "rw_ios_per_sec": 0,
00:10:09.918 "rw_mbytes_per_sec": 0,
00:10:09.918 "r_mbytes_per_sec": 0,
00:10:09.918 "w_mbytes_per_sec": 0
00:10:09.918 },
00:10:09.918 "claimed": true,
00:10:09.918 "claim_type": "exclusive_write",
00:10:09.918 "zoned": false,
00:10:09.918 "supported_io_types": {
00:10:09.918 "read": true,
00:10:09.918 "write": true,
00:10:09.918 "unmap": true,
00:10:09.918 "flush": true,
00:10:09.918 "reset": true,
00:10:09.918 "nvme_admin": false,
00:10:09.918 "nvme_io": false,
00:10:09.918 "nvme_io_md": false,
00:10:09.918 "write_zeroes": true,
00:10:09.918 "zcopy": true,
00:10:09.918 "get_zone_info": false,
00:10:09.918 "zone_management": false,
00:10:09.918 "zone_append": false,
00:10:09.918 "compare": false,
00:10:09.918 "compare_and_write": false,
00:10:09.918 "abort": true,
00:10:09.918 "seek_hole": false,
00:10:09.918 "seek_data": false,
00:10:09.918 "copy": true,
00:10:09.918 "nvme_iov_md": false
00:10:09.918 },
00:10:09.918 "memory_domains": [
00:10:09.918 {
00:10:09.918 "dma_device_id": "system",
00:10:09.918 "dma_device_type": 1
00:10:09.918 },
00:10:09.918 {
00:10:09.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:09.918 "dma_device_type": 2
00:10:09.918 }
00:10:09.918 ],
00:10:09.918 "driver_specific": {}
00:10:09.918 }
00:10:09.918 ]
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.918 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.177 "name": "Existed_Raid",
00:10:10.177 "uuid": "163ffb57-da63-41fc-be53-1de68b30b42b",
00:10:10.177 "strip_size_kb": 0,
00:10:10.177 "state": "configuring",
00:10:10.177 "raid_level": "raid1",
00:10:10.177 "superblock": true,
00:10:10.177 "num_base_bdevs": 3,
00:10:10.177 "num_base_bdevs_discovered": 2,
00:10:10.177 "num_base_bdevs_operational": 3,
00:10:10.177 "base_bdevs_list": [
00:10:10.177 {
00:10:10.177 "name": "BaseBdev1",
00:10:10.177 "uuid": "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f",
00:10:10.177 "is_configured": true,
00:10:10.177 "data_offset": 2048,
00:10:10.177 "data_size": 63488
00:10:10.177 },
00:10:10.177 {
00:10:10.177 "name": "BaseBdev2",
00:10:10.177 "uuid": "9cf864fd-e194-4b7f-8163-a967e2665751",
00:10:10.177 "is_configured": true,
00:10:10.177 "data_offset": 2048,
00:10:10.177 "data_size": 63488
00:10:10.177 },
00:10:10.177 {
00:10:10.177 "name": "BaseBdev3",
00:10:10.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.177 "is_configured": false,
00:10:10.177 "data_offset": 0,
00:10:10.177 "data_size": 0
00:10:10.177 }
00:10:10.177 ]
00:10:10.177 }'
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.177 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.436 [2024-11-21 03:18:57.947385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:10.436 [2024-11-21 03:18:57.947773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:10:10.436 [2024-11-21 03:18:57.947845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:10.436 [2024-11-21 03:18:57.948312] bdev_raid.c:
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:10.436 BaseBdev3 00:10:10.436 [2024-11-21 03:18:57.948561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:10.436 [2024-11-21 03:18:57.948620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:10.436 [2024-11-21 03:18:57.948827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.436 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.437 [ 00:10:10.437 { 00:10:10.437 "name": "BaseBdev3", 00:10:10.437 "aliases": [ 00:10:10.437 "0e783b2f-ecd0-4d2f-9cb5-3c15972a265f" 00:10:10.437 ], 00:10:10.437 "product_name": "Malloc disk", 00:10:10.437 "block_size": 512, 00:10:10.437 "num_blocks": 65536, 00:10:10.437 "uuid": "0e783b2f-ecd0-4d2f-9cb5-3c15972a265f", 00:10:10.437 "assigned_rate_limits": { 00:10:10.437 "rw_ios_per_sec": 0, 00:10:10.437 "rw_mbytes_per_sec": 0, 00:10:10.437 "r_mbytes_per_sec": 0, 00:10:10.437 "w_mbytes_per_sec": 0 00:10:10.437 }, 00:10:10.437 "claimed": true, 00:10:10.437 "claim_type": "exclusive_write", 00:10:10.437 "zoned": false, 00:10:10.437 "supported_io_types": { 00:10:10.437 "read": true, 00:10:10.437 "write": true, 00:10:10.437 "unmap": true, 00:10:10.437 "flush": true, 00:10:10.437 "reset": true, 00:10:10.437 "nvme_admin": false, 00:10:10.437 "nvme_io": false, 00:10:10.437 "nvme_io_md": false, 00:10:10.437 "write_zeroes": true, 00:10:10.437 "zcopy": true, 00:10:10.437 "get_zone_info": false, 00:10:10.437 "zone_management": false, 00:10:10.437 "zone_append": false, 00:10:10.437 "compare": false, 00:10:10.437 "compare_and_write": false, 00:10:10.437 "abort": true, 00:10:10.437 "seek_hole": false, 00:10:10.437 "seek_data": false, 00:10:10.437 "copy": true, 00:10:10.437 "nvme_iov_md": false 00:10:10.437 }, 00:10:10.437 "memory_domains": [ 00:10:10.437 { 00:10:10.437 "dma_device_id": "system", 00:10:10.437 "dma_device_type": 1 00:10:10.437 }, 00:10:10.437 { 00:10:10.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.437 "dma_device_type": 2 00:10:10.437 } 00:10:10.437 ], 00:10:10.437 "driver_specific": {} 00:10:10.437 } 00:10:10.437 ] 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.437 03:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.696 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.696 03:18:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.696 "name": "Existed_Raid", 00:10:10.696 "uuid": "163ffb57-da63-41fc-be53-1de68b30b42b", 00:10:10.696 "strip_size_kb": 0, 00:10:10.696 "state": "online", 00:10:10.696 "raid_level": "raid1", 00:10:10.696 "superblock": true, 00:10:10.696 "num_base_bdevs": 3, 00:10:10.696 "num_base_bdevs_discovered": 3, 00:10:10.696 "num_base_bdevs_operational": 3, 00:10:10.696 "base_bdevs_list": [ 00:10:10.696 { 00:10:10.697 "name": "BaseBdev1", 00:10:10.697 "uuid": "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f", 00:10:10.697 "is_configured": true, 00:10:10.697 "data_offset": 2048, 00:10:10.697 "data_size": 63488 00:10:10.697 }, 00:10:10.697 { 00:10:10.697 "name": "BaseBdev2", 00:10:10.697 "uuid": "9cf864fd-e194-4b7f-8163-a967e2665751", 00:10:10.697 "is_configured": true, 00:10:10.697 "data_offset": 2048, 00:10:10.697 "data_size": 63488 00:10:10.697 }, 00:10:10.697 { 00:10:10.697 "name": "BaseBdev3", 00:10:10.697 "uuid": "0e783b2f-ecd0-4d2f-9cb5-3c15972a265f", 00:10:10.697 "is_configured": true, 00:10:10.697 "data_offset": 2048, 00:10:10.697 "data_size": 63488 00:10:10.697 } 00:10:10.697 ] 00:10:10.697 }' 00:10:10.697 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.697 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.956 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.957 
03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.957 [2024-11-21 03:18:58.383952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.957 "name": "Existed_Raid", 00:10:10.957 "aliases": [ 00:10:10.957 "163ffb57-da63-41fc-be53-1de68b30b42b" 00:10:10.957 ], 00:10:10.957 "product_name": "Raid Volume", 00:10:10.957 "block_size": 512, 00:10:10.957 "num_blocks": 63488, 00:10:10.957 "uuid": "163ffb57-da63-41fc-be53-1de68b30b42b", 00:10:10.957 "assigned_rate_limits": { 00:10:10.957 "rw_ios_per_sec": 0, 00:10:10.957 "rw_mbytes_per_sec": 0, 00:10:10.957 "r_mbytes_per_sec": 0, 00:10:10.957 "w_mbytes_per_sec": 0 00:10:10.957 }, 00:10:10.957 "claimed": false, 00:10:10.957 "zoned": false, 00:10:10.957 "supported_io_types": { 00:10:10.957 "read": true, 00:10:10.957 "write": true, 00:10:10.957 "unmap": false, 00:10:10.957 "flush": false, 00:10:10.957 "reset": true, 00:10:10.957 "nvme_admin": false, 00:10:10.957 "nvme_io": false, 00:10:10.957 "nvme_io_md": false, 00:10:10.957 "write_zeroes": true, 00:10:10.957 "zcopy": false, 00:10:10.957 "get_zone_info": false, 00:10:10.957 "zone_management": false, 00:10:10.957 "zone_append": false, 00:10:10.957 "compare": false, 00:10:10.957 "compare_and_write": false, 00:10:10.957 
"abort": false, 00:10:10.957 "seek_hole": false, 00:10:10.957 "seek_data": false, 00:10:10.957 "copy": false, 00:10:10.957 "nvme_iov_md": false 00:10:10.957 }, 00:10:10.957 "memory_domains": [ 00:10:10.957 { 00:10:10.957 "dma_device_id": "system", 00:10:10.957 "dma_device_type": 1 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.957 "dma_device_type": 2 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "dma_device_id": "system", 00:10:10.957 "dma_device_type": 1 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.957 "dma_device_type": 2 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "dma_device_id": "system", 00:10:10.957 "dma_device_type": 1 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.957 "dma_device_type": 2 00:10:10.957 } 00:10:10.957 ], 00:10:10.957 "driver_specific": { 00:10:10.957 "raid": { 00:10:10.957 "uuid": "163ffb57-da63-41fc-be53-1de68b30b42b", 00:10:10.957 "strip_size_kb": 0, 00:10:10.957 "state": "online", 00:10:10.957 "raid_level": "raid1", 00:10:10.957 "superblock": true, 00:10:10.957 "num_base_bdevs": 3, 00:10:10.957 "num_base_bdevs_discovered": 3, 00:10:10.957 "num_base_bdevs_operational": 3, 00:10:10.957 "base_bdevs_list": [ 00:10:10.957 { 00:10:10.957 "name": "BaseBdev1", 00:10:10.957 "uuid": "9dc9cfbe-46d0-4bcd-aab3-217ed58c790f", 00:10:10.957 "is_configured": true, 00:10:10.957 "data_offset": 2048, 00:10:10.957 "data_size": 63488 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "name": "BaseBdev2", 00:10:10.957 "uuid": "9cf864fd-e194-4b7f-8163-a967e2665751", 00:10:10.957 "is_configured": true, 00:10:10.957 "data_offset": 2048, 00:10:10.957 "data_size": 63488 00:10:10.957 }, 00:10:10.957 { 00:10:10.957 "name": "BaseBdev3", 00:10:10.957 "uuid": "0e783b2f-ecd0-4d2f-9cb5-3c15972a265f", 00:10:10.957 "is_configured": true, 00:10:10.957 "data_offset": 2048, 00:10:10.957 "data_size": 63488 00:10:10.957 } 00:10:10.957 ] 
00:10:10.957 } 00:10:10.957 } 00:10:10.957 }' 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:10.957 BaseBdev2 00:10:10.957 BaseBdev3' 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.957 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.217 [2024-11-21 03:18:58.667829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.217 03:18:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.217 "name": "Existed_Raid", 00:10:11.217 "uuid": "163ffb57-da63-41fc-be53-1de68b30b42b", 00:10:11.217 "strip_size_kb": 0, 00:10:11.217 "state": "online", 00:10:11.217 "raid_level": "raid1", 00:10:11.217 "superblock": true, 00:10:11.217 "num_base_bdevs": 3, 00:10:11.217 "num_base_bdevs_discovered": 2, 00:10:11.217 "num_base_bdevs_operational": 2, 00:10:11.217 "base_bdevs_list": [ 00:10:11.217 { 00:10:11.217 "name": null, 00:10:11.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.217 "is_configured": false, 00:10:11.217 "data_offset": 0, 00:10:11.217 "data_size": 63488 00:10:11.217 }, 00:10:11.217 { 00:10:11.217 "name": "BaseBdev2", 00:10:11.217 "uuid": "9cf864fd-e194-4b7f-8163-a967e2665751", 00:10:11.217 "is_configured": true, 00:10:11.217 "data_offset": 2048, 00:10:11.217 "data_size": 63488 00:10:11.217 }, 00:10:11.217 { 00:10:11.217 "name": "BaseBdev3", 00:10:11.217 "uuid": "0e783b2f-ecd0-4d2f-9cb5-3c15972a265f", 00:10:11.217 "is_configured": true, 00:10:11.217 "data_offset": 2048, 00:10:11.217 "data_size": 63488 00:10:11.217 } 00:10:11.217 ] 00:10:11.217 }' 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.217 03:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.877 03:18:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 [2024-11-21 03:18:59.169731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 [2024-11-21 03:18:59.251548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.877 [2024-11-21 03:18:59.251707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.877 [2024-11-21 03:18:59.273648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.877 [2024-11-21 03:18:59.273791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.877 [2024-11-21 03:18:59.273845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 BaseBdev2 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.877 03:18:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 [ 00:10:11.877 { 00:10:11.877 "name": "BaseBdev2", 00:10:11.877 "aliases": [ 00:10:11.877 "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1" 00:10:11.877 ], 00:10:11.877 "product_name": "Malloc disk", 00:10:11.877 "block_size": 512, 00:10:11.877 "num_blocks": 65536, 00:10:11.877 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:11.877 "assigned_rate_limits": { 00:10:11.877 "rw_ios_per_sec": 0, 00:10:11.877 "rw_mbytes_per_sec": 0, 00:10:11.877 "r_mbytes_per_sec": 0, 00:10:11.877 "w_mbytes_per_sec": 0 00:10:11.877 }, 00:10:11.877 "claimed": false, 00:10:11.877 "zoned": false, 00:10:11.877 "supported_io_types": { 00:10:11.877 "read": true, 00:10:11.877 "write": true, 00:10:11.877 "unmap": true, 00:10:11.877 "flush": true, 00:10:11.877 "reset": true, 00:10:11.877 "nvme_admin": false, 00:10:11.877 "nvme_io": false, 00:10:11.877 "nvme_io_md": false, 00:10:11.877 "write_zeroes": true, 00:10:11.877 "zcopy": true, 00:10:11.877 "get_zone_info": false, 00:10:11.877 "zone_management": false, 00:10:11.877 "zone_append": false, 00:10:11.877 "compare": false, 00:10:11.877 
"compare_and_write": false, 00:10:11.877 "abort": true, 00:10:11.877 "seek_hole": false, 00:10:11.877 "seek_data": false, 00:10:11.877 "copy": true, 00:10:11.877 "nvme_iov_md": false 00:10:11.877 }, 00:10:11.877 "memory_domains": [ 00:10:11.877 { 00:10:11.877 "dma_device_id": "system", 00:10:11.877 "dma_device_type": 1 00:10:11.877 }, 00:10:11.877 { 00:10:11.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.877 "dma_device_type": 2 00:10:11.877 } 00:10:11.877 ], 00:10:11.877 "driver_specific": {} 00:10:11.877 } 00:10:11.877 ] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 BaseBdev3 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.877 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.136 [ 00:10:12.137 { 00:10:12.137 "name": "BaseBdev3", 00:10:12.137 "aliases": [ 00:10:12.137 "f0857481-3821-4e70-b58b-5c93d940a122" 00:10:12.137 ], 00:10:12.137 "product_name": "Malloc disk", 00:10:12.137 "block_size": 512, 00:10:12.137 "num_blocks": 65536, 00:10:12.137 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:12.137 "assigned_rate_limits": { 00:10:12.137 "rw_ios_per_sec": 0, 00:10:12.137 "rw_mbytes_per_sec": 0, 00:10:12.137 "r_mbytes_per_sec": 0, 00:10:12.137 "w_mbytes_per_sec": 0 00:10:12.137 }, 00:10:12.137 "claimed": false, 00:10:12.137 "zoned": false, 00:10:12.137 "supported_io_types": { 00:10:12.137 "read": true, 00:10:12.137 "write": true, 00:10:12.137 "unmap": true, 00:10:12.137 "flush": true, 00:10:12.137 "reset": true, 00:10:12.137 "nvme_admin": false, 00:10:12.137 "nvme_io": false, 00:10:12.137 "nvme_io_md": false, 00:10:12.137 "write_zeroes": true, 00:10:12.137 "zcopy": true, 00:10:12.137 "get_zone_info": false, 00:10:12.137 "zone_management": false, 00:10:12.137 
"zone_append": false, 00:10:12.137 "compare": false, 00:10:12.137 "compare_and_write": false, 00:10:12.137 "abort": true, 00:10:12.137 "seek_hole": false, 00:10:12.137 "seek_data": false, 00:10:12.137 "copy": true, 00:10:12.137 "nvme_iov_md": false 00:10:12.137 }, 00:10:12.137 "memory_domains": [ 00:10:12.137 { 00:10:12.137 "dma_device_id": "system", 00:10:12.137 "dma_device_type": 1 00:10:12.137 }, 00:10:12.137 { 00:10:12.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.137 "dma_device_type": 2 00:10:12.137 } 00:10:12.137 ], 00:10:12.137 "driver_specific": {} 00:10:12.137 } 00:10:12.137 ] 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 [2024-11-21 03:18:59.457809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.137 [2024-11-21 03:18:59.457976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.137 [2024-11-21 03:18:59.458045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.137 [2024-11-21 03:18:59.460491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.137 03:18:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.137 "name": 
"Existed_Raid", 00:10:12.137 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:12.137 "strip_size_kb": 0, 00:10:12.137 "state": "configuring", 00:10:12.137 "raid_level": "raid1", 00:10:12.137 "superblock": true, 00:10:12.137 "num_base_bdevs": 3, 00:10:12.137 "num_base_bdevs_discovered": 2, 00:10:12.137 "num_base_bdevs_operational": 3, 00:10:12.137 "base_bdevs_list": [ 00:10:12.137 { 00:10:12.137 "name": "BaseBdev1", 00:10:12.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.137 "is_configured": false, 00:10:12.137 "data_offset": 0, 00:10:12.137 "data_size": 0 00:10:12.137 }, 00:10:12.137 { 00:10:12.137 "name": "BaseBdev2", 00:10:12.137 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:12.137 "is_configured": true, 00:10:12.137 "data_offset": 2048, 00:10:12.137 "data_size": 63488 00:10:12.137 }, 00:10:12.137 { 00:10:12.137 "name": "BaseBdev3", 00:10:12.137 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:12.137 "is_configured": true, 00:10:12.137 "data_offset": 2048, 00:10:12.137 "data_size": 63488 00:10:12.137 } 00:10:12.137 ] 00:10:12.137 }' 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.137 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.396 [2024-11-21 03:18:59.913872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.396 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.655 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.655 "name": "Existed_Raid", 00:10:12.655 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:12.655 "strip_size_kb": 0, 00:10:12.655 "state": "configuring", 00:10:12.655 "raid_level": "raid1", 00:10:12.655 "superblock": true, 00:10:12.655 
"num_base_bdevs": 3, 00:10:12.655 "num_base_bdevs_discovered": 1, 00:10:12.655 "num_base_bdevs_operational": 3, 00:10:12.655 "base_bdevs_list": [ 00:10:12.655 { 00:10:12.655 "name": "BaseBdev1", 00:10:12.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.655 "is_configured": false, 00:10:12.655 "data_offset": 0, 00:10:12.655 "data_size": 0 00:10:12.655 }, 00:10:12.655 { 00:10:12.655 "name": null, 00:10:12.655 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:12.655 "is_configured": false, 00:10:12.655 "data_offset": 0, 00:10:12.655 "data_size": 63488 00:10:12.656 }, 00:10:12.656 { 00:10:12.656 "name": "BaseBdev3", 00:10:12.656 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:12.656 "is_configured": true, 00:10:12.656 "data_offset": 2048, 00:10:12.656 "data_size": 63488 00:10:12.656 } 00:10:12.656 ] 00:10:12.656 }' 00:10:12.656 03:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.656 03:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.916 [2024-11-21 03:19:00.455711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.916 BaseBdev1 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.916 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.176 [ 00:10:13.176 { 00:10:13.176 "name": "BaseBdev1", 00:10:13.176 "aliases": [ 00:10:13.176 
"0cc2dbf3-6115-468d-a246-08644e7cf273" 00:10:13.176 ], 00:10:13.176 "product_name": "Malloc disk", 00:10:13.176 "block_size": 512, 00:10:13.176 "num_blocks": 65536, 00:10:13.176 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:13.176 "assigned_rate_limits": { 00:10:13.176 "rw_ios_per_sec": 0, 00:10:13.176 "rw_mbytes_per_sec": 0, 00:10:13.176 "r_mbytes_per_sec": 0, 00:10:13.176 "w_mbytes_per_sec": 0 00:10:13.176 }, 00:10:13.176 "claimed": true, 00:10:13.176 "claim_type": "exclusive_write", 00:10:13.176 "zoned": false, 00:10:13.176 "supported_io_types": { 00:10:13.176 "read": true, 00:10:13.176 "write": true, 00:10:13.176 "unmap": true, 00:10:13.176 "flush": true, 00:10:13.176 "reset": true, 00:10:13.176 "nvme_admin": false, 00:10:13.176 "nvme_io": false, 00:10:13.176 "nvme_io_md": false, 00:10:13.176 "write_zeroes": true, 00:10:13.176 "zcopy": true, 00:10:13.176 "get_zone_info": false, 00:10:13.176 "zone_management": false, 00:10:13.176 "zone_append": false, 00:10:13.176 "compare": false, 00:10:13.176 "compare_and_write": false, 00:10:13.176 "abort": true, 00:10:13.176 "seek_hole": false, 00:10:13.176 "seek_data": false, 00:10:13.176 "copy": true, 00:10:13.176 "nvme_iov_md": false 00:10:13.176 }, 00:10:13.176 "memory_domains": [ 00:10:13.176 { 00:10:13.177 "dma_device_id": "system", 00:10:13.177 "dma_device_type": 1 00:10:13.177 }, 00:10:13.177 { 00:10:13.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.177 "dma_device_type": 2 00:10:13.177 } 00:10:13.177 ], 00:10:13.177 "driver_specific": {} 00:10:13.177 } 00:10:13.177 ] 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.177 "name": "Existed_Raid", 00:10:13.177 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:13.177 "strip_size_kb": 0, 00:10:13.177 "state": "configuring", 00:10:13.177 "raid_level": "raid1", 00:10:13.177 "superblock": true, 00:10:13.177 "num_base_bdevs": 3, 00:10:13.177 "num_base_bdevs_discovered": 2, 00:10:13.177 
"num_base_bdevs_operational": 3, 00:10:13.177 "base_bdevs_list": [ 00:10:13.177 { 00:10:13.177 "name": "BaseBdev1", 00:10:13.177 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:13.177 "is_configured": true, 00:10:13.177 "data_offset": 2048, 00:10:13.177 "data_size": 63488 00:10:13.177 }, 00:10:13.177 { 00:10:13.177 "name": null, 00:10:13.177 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:13.177 "is_configured": false, 00:10:13.177 "data_offset": 0, 00:10:13.177 "data_size": 63488 00:10:13.177 }, 00:10:13.177 { 00:10:13.177 "name": "BaseBdev3", 00:10:13.177 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:13.177 "is_configured": true, 00:10:13.177 "data_offset": 2048, 00:10:13.177 "data_size": 63488 00:10:13.177 } 00:10:13.177 ] 00:10:13.177 }' 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.177 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.437 03:19:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.437 [2024-11-21 03:19:00.995938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.697 03:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.697 03:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.697 "name": "Existed_Raid", 00:10:13.697 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:13.697 "strip_size_kb": 0, 00:10:13.697 "state": "configuring", 00:10:13.697 "raid_level": "raid1", 00:10:13.697 "superblock": true, 00:10:13.697 "num_base_bdevs": 3, 00:10:13.697 "num_base_bdevs_discovered": 1, 00:10:13.697 "num_base_bdevs_operational": 3, 00:10:13.697 "base_bdevs_list": [ 00:10:13.697 { 00:10:13.697 "name": "BaseBdev1", 00:10:13.697 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:13.697 "is_configured": true, 00:10:13.697 "data_offset": 2048, 00:10:13.697 "data_size": 63488 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": null, 00:10:13.697 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:13.697 "is_configured": false, 00:10:13.697 "data_offset": 0, 00:10:13.697 "data_size": 63488 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": null, 00:10:13.697 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:13.697 "is_configured": false, 00:10:13.697 "data_offset": 0, 00:10:13.697 "data_size": 63488 00:10:13.697 } 00:10:13.697 ] 00:10:13.697 }' 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.697 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.957 [2024-11-21 03:19:01.468125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.957 03:19:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.957 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.217 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.217 "name": "Existed_Raid", 00:10:14.217 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:14.217 "strip_size_kb": 0, 00:10:14.217 "state": "configuring", 00:10:14.217 "raid_level": "raid1", 00:10:14.217 "superblock": true, 00:10:14.217 "num_base_bdevs": 3, 00:10:14.217 "num_base_bdevs_discovered": 2, 00:10:14.217 "num_base_bdevs_operational": 3, 00:10:14.217 "base_bdevs_list": [ 00:10:14.217 { 00:10:14.217 "name": "BaseBdev1", 00:10:14.217 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:14.217 "is_configured": true, 00:10:14.217 "data_offset": 2048, 00:10:14.217 "data_size": 63488 00:10:14.217 }, 00:10:14.217 { 00:10:14.217 "name": null, 00:10:14.217 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:14.217 "is_configured": false, 00:10:14.217 "data_offset": 0, 00:10:14.217 "data_size": 63488 00:10:14.217 }, 00:10:14.217 { 00:10:14.217 "name": "BaseBdev3", 00:10:14.217 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:14.217 "is_configured": true, 00:10:14.217 "data_offset": 2048, 00:10:14.217 "data_size": 63488 00:10:14.217 } 00:10:14.217 ] 00:10:14.217 }' 00:10:14.217 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.217 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.477 
03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.477 [2024-11-21 03:19:01.964280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.477 
03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.477 03:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.477 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.477 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.477 "name": "Existed_Raid", 00:10:14.477 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:14.477 "strip_size_kb": 0, 00:10:14.477 "state": "configuring", 00:10:14.477 "raid_level": "raid1", 00:10:14.477 "superblock": true, 00:10:14.477 "num_base_bdevs": 3, 00:10:14.477 "num_base_bdevs_discovered": 1, 00:10:14.477 "num_base_bdevs_operational": 3, 00:10:14.477 "base_bdevs_list": [ 00:10:14.477 { 00:10:14.477 "name": null, 00:10:14.477 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:14.477 "is_configured": false, 00:10:14.477 "data_offset": 0, 00:10:14.477 "data_size": 63488 00:10:14.477 }, 00:10:14.477 { 00:10:14.477 "name": null, 00:10:14.477 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:14.477 "is_configured": false, 00:10:14.477 "data_offset": 0, 00:10:14.477 "data_size": 63488 00:10:14.477 }, 00:10:14.477 { 00:10:14.477 "name": 
"BaseBdev3", 00:10:14.477 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:14.477 "is_configured": true, 00:10:14.477 "data_offset": 2048, 00:10:14.477 "data_size": 63488 00:10:14.477 } 00:10:14.477 ] 00:10:14.477 }' 00:10:14.477 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.477 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.045 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.045 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.045 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.045 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.045 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.045 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 [2024-11-21 03:19:02.524310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.046 "name": "Existed_Raid", 00:10:15.046 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:15.046 "strip_size_kb": 0, 00:10:15.046 "state": "configuring", 00:10:15.046 "raid_level": "raid1", 00:10:15.046 "superblock": true, 00:10:15.046 "num_base_bdevs": 3, 00:10:15.046 "num_base_bdevs_discovered": 2, 00:10:15.046 "num_base_bdevs_operational": 3, 00:10:15.046 
"base_bdevs_list": [ 00:10:15.046 { 00:10:15.046 "name": null, 00:10:15.046 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:15.046 "is_configured": false, 00:10:15.046 "data_offset": 0, 00:10:15.046 "data_size": 63488 00:10:15.046 }, 00:10:15.046 { 00:10:15.046 "name": "BaseBdev2", 00:10:15.046 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:15.046 "is_configured": true, 00:10:15.046 "data_offset": 2048, 00:10:15.046 "data_size": 63488 00:10:15.046 }, 00:10:15.046 { 00:10:15.046 "name": "BaseBdev3", 00:10:15.046 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:15.046 "is_configured": true, 00:10:15.046 "data_offset": 2048, 00:10:15.046 "data_size": 63488 00:10:15.046 } 00:10:15.046 ] 00:10:15.046 }' 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.046 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.613 03:19:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.613 03:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.613 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0cc2dbf3-6115-468d-a246-08644e7cf273 00:10:15.613 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.614 [2024-11-21 03:19:03.037309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.614 [2024-11-21 03:19:03.037515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.614 [2024-11-21 03:19:03.037535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.614 [2024-11-21 03:19:03.037797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:10:15.614 [2024-11-21 03:19:03.037958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.614 [2024-11-21 03:19:03.037969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:15.614 [2024-11-21 03:19:03.038100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.614 NewBaseBdev 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.614 [ 00:10:15.614 { 00:10:15.614 "name": "NewBaseBdev", 00:10:15.614 "aliases": [ 00:10:15.614 "0cc2dbf3-6115-468d-a246-08644e7cf273" 00:10:15.614 ], 00:10:15.614 "product_name": "Malloc disk", 00:10:15.614 "block_size": 512, 00:10:15.614 "num_blocks": 65536, 00:10:15.614 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:15.614 "assigned_rate_limits": { 00:10:15.614 "rw_ios_per_sec": 0, 00:10:15.614 "rw_mbytes_per_sec": 0, 00:10:15.614 "r_mbytes_per_sec": 0, 00:10:15.614 "w_mbytes_per_sec": 0 00:10:15.614 }, 00:10:15.614 "claimed": true, 00:10:15.614 "claim_type": "exclusive_write", 00:10:15.614 "zoned": false, 00:10:15.614 "supported_io_types": { 00:10:15.614 "read": true, 00:10:15.614 "write": true, 00:10:15.614 "unmap": true, 00:10:15.614 "flush": true, 00:10:15.614 "reset": true, 00:10:15.614 "nvme_admin": 
false, 00:10:15.614 "nvme_io": false, 00:10:15.614 "nvme_io_md": false, 00:10:15.614 "write_zeroes": true, 00:10:15.614 "zcopy": true, 00:10:15.614 "get_zone_info": false, 00:10:15.614 "zone_management": false, 00:10:15.614 "zone_append": false, 00:10:15.614 "compare": false, 00:10:15.614 "compare_and_write": false, 00:10:15.614 "abort": true, 00:10:15.614 "seek_hole": false, 00:10:15.614 "seek_data": false, 00:10:15.614 "copy": true, 00:10:15.614 "nvme_iov_md": false 00:10:15.614 }, 00:10:15.614 "memory_domains": [ 00:10:15.614 { 00:10:15.614 "dma_device_id": "system", 00:10:15.614 "dma_device_type": 1 00:10:15.614 }, 00:10:15.614 { 00:10:15.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.614 "dma_device_type": 2 00:10:15.614 } 00:10:15.614 ], 00:10:15.614 "driver_specific": {} 00:10:15.614 } 00:10:15.614 ] 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.614 "name": "Existed_Raid", 00:10:15.614 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:15.614 "strip_size_kb": 0, 00:10:15.614 "state": "online", 00:10:15.614 "raid_level": "raid1", 00:10:15.614 "superblock": true, 00:10:15.614 "num_base_bdevs": 3, 00:10:15.614 "num_base_bdevs_discovered": 3, 00:10:15.614 "num_base_bdevs_operational": 3, 00:10:15.614 "base_bdevs_list": [ 00:10:15.614 { 00:10:15.614 "name": "NewBaseBdev", 00:10:15.614 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:15.614 "is_configured": true, 00:10:15.614 "data_offset": 2048, 00:10:15.614 "data_size": 63488 00:10:15.614 }, 00:10:15.614 { 00:10:15.614 "name": "BaseBdev2", 00:10:15.614 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:15.614 "is_configured": true, 00:10:15.614 "data_offset": 2048, 00:10:15.614 "data_size": 63488 00:10:15.614 }, 00:10:15.614 { 00:10:15.614 "name": "BaseBdev3", 00:10:15.614 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:15.614 "is_configured": true, 00:10:15.614 "data_offset": 2048, 00:10:15.614 "data_size": 63488 00:10:15.614 } 
00:10:15.614 ] 00:10:15.614 }' 00:10:15.614 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.615 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.182 [2024-11-21 03:19:03.537862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.182 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.182 "name": "Existed_Raid", 00:10:16.182 "aliases": [ 00:10:16.182 "1257d66a-dae0-4ebb-8bd9-0169c1f27b39" 00:10:16.182 ], 00:10:16.182 "product_name": "Raid Volume", 00:10:16.182 "block_size": 512, 00:10:16.182 "num_blocks": 63488, 00:10:16.182 "uuid": 
"1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:16.182 "assigned_rate_limits": { 00:10:16.182 "rw_ios_per_sec": 0, 00:10:16.182 "rw_mbytes_per_sec": 0, 00:10:16.182 "r_mbytes_per_sec": 0, 00:10:16.182 "w_mbytes_per_sec": 0 00:10:16.182 }, 00:10:16.182 "claimed": false, 00:10:16.182 "zoned": false, 00:10:16.182 "supported_io_types": { 00:10:16.182 "read": true, 00:10:16.182 "write": true, 00:10:16.182 "unmap": false, 00:10:16.182 "flush": false, 00:10:16.182 "reset": true, 00:10:16.182 "nvme_admin": false, 00:10:16.182 "nvme_io": false, 00:10:16.182 "nvme_io_md": false, 00:10:16.182 "write_zeroes": true, 00:10:16.182 "zcopy": false, 00:10:16.182 "get_zone_info": false, 00:10:16.182 "zone_management": false, 00:10:16.182 "zone_append": false, 00:10:16.182 "compare": false, 00:10:16.182 "compare_and_write": false, 00:10:16.182 "abort": false, 00:10:16.182 "seek_hole": false, 00:10:16.182 "seek_data": false, 00:10:16.182 "copy": false, 00:10:16.182 "nvme_iov_md": false 00:10:16.182 }, 00:10:16.182 "memory_domains": [ 00:10:16.182 { 00:10:16.182 "dma_device_id": "system", 00:10:16.182 "dma_device_type": 1 00:10:16.182 }, 00:10:16.182 { 00:10:16.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.182 "dma_device_type": 2 00:10:16.182 }, 00:10:16.182 { 00:10:16.182 "dma_device_id": "system", 00:10:16.182 "dma_device_type": 1 00:10:16.182 }, 00:10:16.182 { 00:10:16.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.182 "dma_device_type": 2 00:10:16.182 }, 00:10:16.182 { 00:10:16.182 "dma_device_id": "system", 00:10:16.182 "dma_device_type": 1 00:10:16.182 }, 00:10:16.182 { 00:10:16.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.182 "dma_device_type": 2 00:10:16.182 } 00:10:16.182 ], 00:10:16.182 "driver_specific": { 00:10:16.182 "raid": { 00:10:16.182 "uuid": "1257d66a-dae0-4ebb-8bd9-0169c1f27b39", 00:10:16.182 "strip_size_kb": 0, 00:10:16.182 "state": "online", 00:10:16.182 "raid_level": "raid1", 00:10:16.182 "superblock": true, 00:10:16.182 "num_base_bdevs": 
3, 00:10:16.182 "num_base_bdevs_discovered": 3, 00:10:16.182 "num_base_bdevs_operational": 3, 00:10:16.182 "base_bdevs_list": [ 00:10:16.182 { 00:10:16.182 "name": "NewBaseBdev", 00:10:16.182 "uuid": "0cc2dbf3-6115-468d-a246-08644e7cf273", 00:10:16.182 "is_configured": true, 00:10:16.182 "data_offset": 2048, 00:10:16.182 "data_size": 63488 00:10:16.182 }, 00:10:16.182 { 00:10:16.182 "name": "BaseBdev2", 00:10:16.182 "uuid": "fcf01b7a-092d-4bd9-87a9-dfe51157b8b1", 00:10:16.182 "is_configured": true, 00:10:16.182 "data_offset": 2048, 00:10:16.182 "data_size": 63488 00:10:16.182 }, 00:10:16.183 { 00:10:16.183 "name": "BaseBdev3", 00:10:16.183 "uuid": "f0857481-3821-4e70-b58b-5c93d940a122", 00:10:16.183 "is_configured": true, 00:10:16.183 "data_offset": 2048, 00:10:16.183 "data_size": 63488 00:10:16.183 } 00:10:16.183 ] 00:10:16.183 } 00:10:16.183 } 00:10:16.183 }' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.183 BaseBdev2 00:10:16.183 BaseBdev3' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.183 03:19:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.183 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.442 03:19:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.442 [2024-11-21 03:19:03.801558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.442 [2024-11-21 03:19:03.801636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.442 [2024-11-21 03:19:03.801744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.442 [2024-11-21 03:19:03.802088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.442 [2024-11-21 03:19:03.802151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81094 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81094 ']' 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81094 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 
00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81094 00:10:16.442 killing process with pid 81094 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81094' 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81094 00:10:16.442 [2024-11-21 03:19:03.851348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.442 03:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81094 00:10:16.442 [2024-11-21 03:19:03.909923] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.701 03:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.701 00:10:16.701 real 0m9.205s 00:10:16.701 user 0m15.404s 00:10:16.701 sys 0m2.030s 00:10:16.701 ************************************ 00:10:16.701 END TEST raid_state_function_test_sb 00:10:16.701 ************************************ 00:10:16.701 03:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.701 03:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.960 03:19:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:16.961 03:19:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.961 03:19:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.961 03:19:04 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.961 ************************************ 00:10:16.961 START TEST raid_superblock_test 00:10:16.961 ************************************ 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81702 00:10:16.961 03:19:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81702 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81702 ']' 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.961 03:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.961 [2024-11-21 03:19:04.410814] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:16.961 [2024-11-21 03:19:04.411056] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81702 ] 00:10:17.220 [2024-11-21 03:19:04.551974] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:17.220 [2024-11-21 03:19:04.587344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.220 [2024-11-21 03:19:04.633325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.220 [2024-11-21 03:19:04.713402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.220 [2024-11-21 03:19:04.713561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.789 malloc1 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.789 [2024-11-21 03:19:05.268230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.789 [2024-11-21 03:19:05.268419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.789 [2024-11-21 03:19:05.268481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:17.789 [2024-11-21 03:19:05.268522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.789 [2024-11-21 03:19:05.271081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.789 [2024-11-21 03:19:05.271209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.789 pt1 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.789 malloc2 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.789 [2024-11-21 03:19:05.301749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.789 [2024-11-21 03:19:05.301838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.789 [2024-11-21 03:19:05.301862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:17.789 [2024-11-21 03:19:05.301873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.789 [2024-11-21 03:19:05.304461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.789 [2024-11-21 03:19:05.304510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.789 pt2 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.789 malloc3 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.789 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.789 [2024-11-21 03:19:05.331391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.789 [2024-11-21 03:19:05.331554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.789 [2024-11-21 03:19:05.331599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:17.790 [2024-11-21 03:19:05.331637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:17.790 [2024-11-21 03:19:05.334132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.790 [2024-11-21 03:19:05.334224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.790 pt3 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.790 [2024-11-21 03:19:05.343435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.790 [2024-11-21 03:19:05.345682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.790 [2024-11-21 03:19:05.345827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.790 [2024-11-21 03:19:05.346061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:17.790 [2024-11-21 03:19:05.346126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.790 [2024-11-21 03:19:05.346532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:17.790 [2024-11-21 03:19:05.346793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:17.790 [2024-11-21 03:19:05.346847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:17.790 [2024-11-21 03:19:05.347086] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.790 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.055 "name": "raid_bdev1", 00:10:18.055 "uuid": 
"7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:18.055 "strip_size_kb": 0, 00:10:18.055 "state": "online", 00:10:18.055 "raid_level": "raid1", 00:10:18.055 "superblock": true, 00:10:18.055 "num_base_bdevs": 3, 00:10:18.055 "num_base_bdevs_discovered": 3, 00:10:18.055 "num_base_bdevs_operational": 3, 00:10:18.055 "base_bdevs_list": [ 00:10:18.055 { 00:10:18.055 "name": "pt1", 00:10:18.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.055 "is_configured": true, 00:10:18.055 "data_offset": 2048, 00:10:18.055 "data_size": 63488 00:10:18.055 }, 00:10:18.055 { 00:10:18.055 "name": "pt2", 00:10:18.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.055 "is_configured": true, 00:10:18.055 "data_offset": 2048, 00:10:18.055 "data_size": 63488 00:10:18.055 }, 00:10:18.055 { 00:10:18.055 "name": "pt3", 00:10:18.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.055 "is_configured": true, 00:10:18.055 "data_offset": 2048, 00:10:18.055 "data_size": 63488 00:10:18.055 } 00:10:18.055 ] 00:10:18.055 }' 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.055 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.325 [2024-11-21 03:19:05.835890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.325 "name": "raid_bdev1", 00:10:18.325 "aliases": [ 00:10:18.325 "7066d564-3d72-4b6b-83aa-bf8b48ac4899" 00:10:18.325 ], 00:10:18.325 "product_name": "Raid Volume", 00:10:18.325 "block_size": 512, 00:10:18.325 "num_blocks": 63488, 00:10:18.325 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:18.325 "assigned_rate_limits": { 00:10:18.325 "rw_ios_per_sec": 0, 00:10:18.325 "rw_mbytes_per_sec": 0, 00:10:18.325 "r_mbytes_per_sec": 0, 00:10:18.325 "w_mbytes_per_sec": 0 00:10:18.325 }, 00:10:18.325 "claimed": false, 00:10:18.325 "zoned": false, 00:10:18.325 "supported_io_types": { 00:10:18.325 "read": true, 00:10:18.325 "write": true, 00:10:18.325 "unmap": false, 00:10:18.325 "flush": false, 00:10:18.325 "reset": true, 00:10:18.325 "nvme_admin": false, 00:10:18.325 "nvme_io": false, 00:10:18.325 "nvme_io_md": false, 00:10:18.325 "write_zeroes": true, 00:10:18.325 "zcopy": false, 00:10:18.325 "get_zone_info": false, 00:10:18.325 "zone_management": false, 00:10:18.325 "zone_append": false, 00:10:18.325 "compare": false, 00:10:18.325 "compare_and_write": false, 00:10:18.325 "abort": false, 00:10:18.325 "seek_hole": false, 00:10:18.325 "seek_data": false, 00:10:18.325 "copy": false, 00:10:18.325 "nvme_iov_md": false 00:10:18.325 }, 00:10:18.325 "memory_domains": [ 00:10:18.325 { 00:10:18.325 "dma_device_id": "system", 00:10:18.325 
"dma_device_type": 1 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.325 "dma_device_type": 2 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "dma_device_id": "system", 00:10:18.325 "dma_device_type": 1 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.325 "dma_device_type": 2 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "dma_device_id": "system", 00:10:18.325 "dma_device_type": 1 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.325 "dma_device_type": 2 00:10:18.325 } 00:10:18.325 ], 00:10:18.325 "driver_specific": { 00:10:18.325 "raid": { 00:10:18.325 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:18.325 "strip_size_kb": 0, 00:10:18.325 "state": "online", 00:10:18.325 "raid_level": "raid1", 00:10:18.325 "superblock": true, 00:10:18.325 "num_base_bdevs": 3, 00:10:18.325 "num_base_bdevs_discovered": 3, 00:10:18.325 "num_base_bdevs_operational": 3, 00:10:18.325 "base_bdevs_list": [ 00:10:18.325 { 00:10:18.325 "name": "pt1", 00:10:18.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.325 "is_configured": true, 00:10:18.325 "data_offset": 2048, 00:10:18.325 "data_size": 63488 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "name": "pt2", 00:10:18.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.325 "is_configured": true, 00:10:18.325 "data_offset": 2048, 00:10:18.325 "data_size": 63488 00:10:18.325 }, 00:10:18.325 { 00:10:18.325 "name": "pt3", 00:10:18.325 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.325 "is_configured": true, 00:10:18.325 "data_offset": 2048, 00:10:18.325 "data_size": 63488 00:10:18.325 } 00:10:18.325 ] 00:10:18.325 } 00:10:18.325 } 00:10:18.325 }' 00:10:18.325 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.586 pt2 00:10:18.586 pt3' 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.586 03:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.586 [2024-11-21 03:19:06.115978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7066d564-3d72-4b6b-83aa-bf8b48ac4899 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7066d564-3d72-4b6b-83aa-bf8b48ac4899 ']' 00:10:18.586 03:19:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.586 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.586 [2024-11-21 03:19:06.147623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.586 [2024-11-21 03:19:06.147671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.586 [2024-11-21 03:19:06.147777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.586 [2024-11-21 03:19:06.147867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.586 [2024-11-21 03:19:06.147880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 [2024-11-21 03:19:06.311760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:18.848 [2024-11-21 03:19:06.314303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:18.848 [2024-11-21 03:19:06.314443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:18.848 [2024-11-21 03:19:06.314535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:18.848 [2024-11-21 03:19:06.314668] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:18.848 [2024-11-21 03:19:06.314698] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:18.848 [2024-11-21 03:19:06.314716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.848 [2024-11-21 03:19:06.314727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:18.848 request: 00:10:18.848 { 00:10:18.848 "name": "raid_bdev1", 00:10:18.848 "raid_level": "raid1", 00:10:18.848 "base_bdevs": [ 00:10:18.848 "malloc1", 00:10:18.848 "malloc2", 00:10:18.848 "malloc3" 00:10:18.848 ], 00:10:18.848 "superblock": false, 00:10:18.848 "method": "bdev_raid_create", 00:10:18.848 "req_id": 1 00:10:18.848 } 00:10:18.848 Got JSON-RPC error response 00:10:18.848 response: 00:10:18.848 { 00:10:18.848 "code": -17, 00:10:18.848 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:18.848 } 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.848 [2024-11-21 03:19:06.375709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.848 [2024-11-21 03:19:06.375897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.848 [2024-11-21 03:19:06.375972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.848 [2024-11-21 03:19:06.376012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.848 [2024-11-21 03:19:06.378686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.848 [2024-11-21 03:19:06.378802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.848 [2024-11-21 03:19:06.378952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.848 [2024-11-21 03:19:06.379078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.848 pt1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.848 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.108 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.108 "name": "raid_bdev1", 00:10:19.108 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:19.108 "strip_size_kb": 0, 00:10:19.108 "state": "configuring", 00:10:19.108 "raid_level": "raid1", 00:10:19.108 "superblock": true, 00:10:19.108 "num_base_bdevs": 3, 00:10:19.108 "num_base_bdevs_discovered": 1, 00:10:19.108 "num_base_bdevs_operational": 3, 00:10:19.108 "base_bdevs_list": [ 00:10:19.108 { 00:10:19.108 "name": 
"pt1", 00:10:19.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.109 "is_configured": true, 00:10:19.109 "data_offset": 2048, 00:10:19.109 "data_size": 63488 00:10:19.109 }, 00:10:19.109 { 00:10:19.109 "name": null, 00:10:19.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.109 "is_configured": false, 00:10:19.109 "data_offset": 2048, 00:10:19.109 "data_size": 63488 00:10:19.109 }, 00:10:19.109 { 00:10:19.109 "name": null, 00:10:19.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.109 "is_configured": false, 00:10:19.109 "data_offset": 2048, 00:10:19.109 "data_size": 63488 00:10:19.109 } 00:10:19.109 ] 00:10:19.109 }' 00:10:19.109 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.109 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 [2024-11-21 03:19:06.787855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.369 [2024-11-21 03:19:06.787962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.369 [2024-11-21 03:19:06.787993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:19.369 [2024-11-21 03:19:06.788005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.369 [2024-11-21 03:19:06.788513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.369 [2024-11-21 03:19:06.788566] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.369 [2024-11-21 03:19:06.788660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.369 [2024-11-21 03:19:06.788685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.369 pt2 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 [2024-11-21 03:19:06.799931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.369 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.370 03:19:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.370 "name": "raid_bdev1", 00:10:19.370 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:19.370 "strip_size_kb": 0, 00:10:19.370 "state": "configuring", 00:10:19.370 "raid_level": "raid1", 00:10:19.370 "superblock": true, 00:10:19.370 "num_base_bdevs": 3, 00:10:19.370 "num_base_bdevs_discovered": 1, 00:10:19.370 "num_base_bdevs_operational": 3, 00:10:19.370 "base_bdevs_list": [ 00:10:19.370 { 00:10:19.370 "name": "pt1", 00:10:19.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.370 "is_configured": true, 00:10:19.370 "data_offset": 2048, 00:10:19.370 "data_size": 63488 00:10:19.370 }, 00:10:19.370 { 00:10:19.370 "name": null, 00:10:19.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.370 "is_configured": false, 00:10:19.370 "data_offset": 0, 00:10:19.370 "data_size": 63488 00:10:19.370 }, 00:10:19.370 { 00:10:19.370 "name": null, 00:10:19.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.370 "is_configured": false, 00:10:19.370 "data_offset": 2048, 00:10:19.370 "data_size": 63488 00:10:19.370 } 00:10:19.370 ] 00:10:19.370 }' 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.370 03:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.939 [2024-11-21 03:19:07.232062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.939 [2024-11-21 03:19:07.232262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.939 [2024-11-21 03:19:07.232316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:19.939 [2024-11-21 03:19:07.232373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.939 [2024-11-21 03:19:07.232857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.939 [2024-11-21 03:19:07.232922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.939 [2024-11-21 03:19:07.233046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.939 [2024-11-21 03:19:07.233111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.939 pt2 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.939 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.939 [2024-11-21 03:19:07.243998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.939 [2024-11-21 03:19:07.244167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.939 [2024-11-21 03:19:07.244217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.939 [2024-11-21 03:19:07.244260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.939 [2024-11-21 03:19:07.244748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.939 [2024-11-21 03:19:07.244838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.940 [2024-11-21 03:19:07.244959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:19.940 [2024-11-21 03:19:07.244991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.940 [2024-11-21 03:19:07.245123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:19.940 [2024-11-21 03:19:07.245138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.940 [2024-11-21 03:19:07.245418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:19.940 [2024-11-21 03:19:07.245563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:19.940 [2024-11-21 03:19:07.245577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:19.940 [2024-11-21 03:19:07.245713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:19.940 pt3 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.940 03:19:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.940 "name": "raid_bdev1", 00:10:19.940 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:19.940 "strip_size_kb": 0, 00:10:19.940 "state": "online", 00:10:19.940 "raid_level": "raid1", 00:10:19.940 "superblock": true, 00:10:19.940 "num_base_bdevs": 3, 00:10:19.940 "num_base_bdevs_discovered": 3, 00:10:19.940 "num_base_bdevs_operational": 3, 00:10:19.940 "base_bdevs_list": [ 00:10:19.940 { 00:10:19.940 "name": "pt1", 00:10:19.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.940 "is_configured": true, 00:10:19.940 "data_offset": 2048, 00:10:19.940 "data_size": 63488 00:10:19.940 }, 00:10:19.940 { 00:10:19.940 "name": "pt2", 00:10:19.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.940 "is_configured": true, 00:10:19.940 "data_offset": 2048, 00:10:19.940 "data_size": 63488 00:10:19.940 }, 00:10:19.940 { 00:10:19.940 "name": "pt3", 00:10:19.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.940 "is_configured": true, 00:10:19.940 "data_offset": 2048, 00:10:19.940 "data_size": 63488 00:10:19.940 } 00:10:19.940 ] 00:10:19.940 }' 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.940 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.200 [2024-11-21 03:19:07.680479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.200 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.200 "name": "raid_bdev1", 00:10:20.200 "aliases": [ 00:10:20.200 "7066d564-3d72-4b6b-83aa-bf8b48ac4899" 00:10:20.200 ], 00:10:20.200 "product_name": "Raid Volume", 00:10:20.200 "block_size": 512, 00:10:20.200 "num_blocks": 63488, 00:10:20.200 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:20.200 "assigned_rate_limits": { 00:10:20.200 "rw_ios_per_sec": 0, 00:10:20.200 "rw_mbytes_per_sec": 0, 00:10:20.200 "r_mbytes_per_sec": 0, 00:10:20.200 "w_mbytes_per_sec": 0 00:10:20.200 }, 00:10:20.200 "claimed": false, 00:10:20.200 "zoned": false, 00:10:20.200 "supported_io_types": { 00:10:20.200 "read": true, 00:10:20.200 "write": true, 00:10:20.200 "unmap": false, 00:10:20.200 "flush": false, 00:10:20.200 "reset": true, 00:10:20.200 "nvme_admin": false, 00:10:20.200 "nvme_io": false, 00:10:20.200 "nvme_io_md": false, 00:10:20.200 "write_zeroes": true, 00:10:20.200 "zcopy": false, 00:10:20.200 "get_zone_info": false, 00:10:20.200 "zone_management": false, 00:10:20.200 "zone_append": false, 00:10:20.200 "compare": false, 00:10:20.200 "compare_and_write": false, 00:10:20.200 "abort": false, 00:10:20.200 "seek_hole": false, 00:10:20.200 "seek_data": false, 00:10:20.200 "copy": false, 00:10:20.200 
"nvme_iov_md": false 00:10:20.200 }, 00:10:20.200 "memory_domains": [ 00:10:20.200 { 00:10:20.200 "dma_device_id": "system", 00:10:20.200 "dma_device_type": 1 00:10:20.200 }, 00:10:20.200 { 00:10:20.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.200 "dma_device_type": 2 00:10:20.200 }, 00:10:20.200 { 00:10:20.200 "dma_device_id": "system", 00:10:20.200 "dma_device_type": 1 00:10:20.200 }, 00:10:20.200 { 00:10:20.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.201 "dma_device_type": 2 00:10:20.201 }, 00:10:20.201 { 00:10:20.201 "dma_device_id": "system", 00:10:20.201 "dma_device_type": 1 00:10:20.201 }, 00:10:20.201 { 00:10:20.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.201 "dma_device_type": 2 00:10:20.201 } 00:10:20.201 ], 00:10:20.201 "driver_specific": { 00:10:20.201 "raid": { 00:10:20.201 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:20.201 "strip_size_kb": 0, 00:10:20.201 "state": "online", 00:10:20.201 "raid_level": "raid1", 00:10:20.201 "superblock": true, 00:10:20.201 "num_base_bdevs": 3, 00:10:20.201 "num_base_bdevs_discovered": 3, 00:10:20.201 "num_base_bdevs_operational": 3, 00:10:20.201 "base_bdevs_list": [ 00:10:20.201 { 00:10:20.201 "name": "pt1", 00:10:20.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.201 "is_configured": true, 00:10:20.201 "data_offset": 2048, 00:10:20.201 "data_size": 63488 00:10:20.201 }, 00:10:20.201 { 00:10:20.201 "name": "pt2", 00:10:20.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.201 "is_configured": true, 00:10:20.201 "data_offset": 2048, 00:10:20.201 "data_size": 63488 00:10:20.201 }, 00:10:20.201 { 00:10:20.201 "name": "pt3", 00:10:20.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.201 "is_configured": true, 00:10:20.201 "data_offset": 2048, 00:10:20.201 "data_size": 63488 00:10:20.201 } 00:10:20.201 ] 00:10:20.201 } 00:10:20.201 } 00:10:20.201 }' 00:10:20.201 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.460 pt2 00:10:20.460 pt3' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.460 03:19:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.460 [2024-11-21 03:19:07.956679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7066d564-3d72-4b6b-83aa-bf8b48ac4899 '!=' 
7066d564-3d72-4b6b-83aa-bf8b48ac4899 ']' 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.460 03:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.461 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.461 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.461 [2024-11-21 03:19:07.996320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:20.461 03:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.461 03:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.461 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.720 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.720 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.720 "name": "raid_bdev1", 00:10:20.720 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:20.720 "strip_size_kb": 0, 00:10:20.720 "state": "online", 00:10:20.720 "raid_level": "raid1", 00:10:20.720 "superblock": true, 00:10:20.720 "num_base_bdevs": 3, 00:10:20.720 "num_base_bdevs_discovered": 2, 00:10:20.720 "num_base_bdevs_operational": 2, 00:10:20.720 "base_bdevs_list": [ 00:10:20.720 { 00:10:20.720 "name": null, 00:10:20.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.720 "is_configured": false, 00:10:20.720 "data_offset": 0, 00:10:20.720 "data_size": 63488 00:10:20.720 }, 00:10:20.720 { 00:10:20.720 "name": "pt2", 00:10:20.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.720 "is_configured": true, 00:10:20.720 "data_offset": 2048, 00:10:20.720 "data_size": 63488 00:10:20.720 }, 00:10:20.720 { 00:10:20.720 "name": "pt3", 00:10:20.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.720 "is_configured": true, 00:10:20.720 "data_offset": 2048, 00:10:20.720 "data_size": 63488 00:10:20.720 } 00:10:20.720 ] 00:10:20.720 }' 00:10:20.720 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.720 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.979 [2024-11-21 03:19:08.452404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.979 [2024-11-21 03:19:08.452547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.979 [2024-11-21 03:19:08.452665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.979 [2024-11-21 03:19:08.452749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.979 [2024-11-21 03:19:08.452807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.979 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.980 [2024-11-21 03:19:08.528417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.980 [2024-11-21 03:19:08.528502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.980 
[2024-11-21 03:19:08.528521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:20.980 [2024-11-21 03:19:08.528533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.980 [2024-11-21 03:19:08.530846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.980 [2024-11-21 03:19:08.530980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.980 [2024-11-21 03:19:08.531084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.980 [2024-11-21 03:19:08.531147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.980 pt2 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.980 03:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.980 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.239 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.239 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.239 "name": "raid_bdev1", 00:10:21.239 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:21.240 "strip_size_kb": 0, 00:10:21.240 "state": "configuring", 00:10:21.240 "raid_level": "raid1", 00:10:21.240 "superblock": true, 00:10:21.240 "num_base_bdevs": 3, 00:10:21.240 "num_base_bdevs_discovered": 1, 00:10:21.240 "num_base_bdevs_operational": 2, 00:10:21.240 "base_bdevs_list": [ 00:10:21.240 { 00:10:21.240 "name": null, 00:10:21.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.240 "is_configured": false, 00:10:21.240 "data_offset": 2048, 00:10:21.240 "data_size": 63488 00:10:21.240 }, 00:10:21.240 { 00:10:21.240 "name": "pt2", 00:10:21.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.240 "is_configured": true, 00:10:21.240 "data_offset": 2048, 00:10:21.240 "data_size": 63488 00:10:21.240 }, 00:10:21.240 { 00:10:21.240 "name": null, 00:10:21.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.240 "is_configured": false, 00:10:21.240 "data_offset": 2048, 00:10:21.240 "data_size": 63488 00:10:21.240 } 00:10:21.240 ] 00:10:21.240 }' 00:10:21.240 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.240 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.500 [2024-11-21 03:19:08.980614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.500 [2024-11-21 03:19:08.980814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.500 [2024-11-21 03:19:08.980869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:21.500 [2024-11-21 03:19:08.980919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.500 [2024-11-21 03:19:08.981416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.500 [2024-11-21 03:19:08.981488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.500 [2024-11-21 03:19:08.981613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.500 [2024-11-21 03:19:08.981677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.500 [2024-11-21 03:19:08.981811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.500 [2024-11-21 03:19:08.981861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.500 [2024-11-21 03:19:08.982175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:21.500 [2024-11-21 03:19:08.982353] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.500 [2024-11-21 03:19:08.982395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:21.500 [2024-11-21 03:19:08.982571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.500 pt3 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.500 03:19:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.500 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.500 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.500 "name": "raid_bdev1", 00:10:21.500 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:21.500 "strip_size_kb": 0, 00:10:21.500 "state": "online", 00:10:21.500 "raid_level": "raid1", 00:10:21.500 "superblock": true, 00:10:21.500 "num_base_bdevs": 3, 00:10:21.500 "num_base_bdevs_discovered": 2, 00:10:21.500 "num_base_bdevs_operational": 2, 00:10:21.500 "base_bdevs_list": [ 00:10:21.500 { 00:10:21.500 "name": null, 00:10:21.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.500 "is_configured": false, 00:10:21.500 "data_offset": 2048, 00:10:21.500 "data_size": 63488 00:10:21.500 }, 00:10:21.500 { 00:10:21.500 "name": "pt2", 00:10:21.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.500 "is_configured": true, 00:10:21.500 "data_offset": 2048, 00:10:21.500 "data_size": 63488 00:10:21.500 }, 00:10:21.500 { 00:10:21.500 "name": "pt3", 00:10:21.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.500 "is_configured": true, 00:10:21.500 "data_offset": 2048, 00:10:21.500 "data_size": 63488 00:10:21.500 } 00:10:21.501 ] 00:10:21.501 }' 00:10:21.501 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.501 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.076 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.076 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.076 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.076 [2024-11-21 03:19:09.448743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.076 [2024-11-21 
03:19:09.448800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.076 [2024-11-21 03:19:09.448896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.076 [2024-11-21 03:19:09.448968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.077 [2024-11-21 03:19:09.448979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.077 03:19:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.077 [2024-11-21 03:19:09.524733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.077 [2024-11-21 03:19:09.524841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.077 [2024-11-21 03:19:09.524866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:22.077 [2024-11-21 03:19:09.524877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.077 [2024-11-21 03:19:09.527203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.077 [2024-11-21 03:19:09.527253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.077 [2024-11-21 03:19:09.527345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:22.077 [2024-11-21 03:19:09.527388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.077 [2024-11-21 03:19:09.527512] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:22.077 [2024-11-21 03:19:09.527530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.077 [2024-11-21 03:19:09.527556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:10:22.077 [2024-11-21 03:19:09.527594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.077 pt1 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.077 "name": "raid_bdev1", 00:10:22.077 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:22.077 "strip_size_kb": 
0, 00:10:22.077 "state": "configuring", 00:10:22.077 "raid_level": "raid1", 00:10:22.077 "superblock": true, 00:10:22.077 "num_base_bdevs": 3, 00:10:22.077 "num_base_bdevs_discovered": 1, 00:10:22.077 "num_base_bdevs_operational": 2, 00:10:22.077 "base_bdevs_list": [ 00:10:22.077 { 00:10:22.077 "name": null, 00:10:22.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.077 "is_configured": false, 00:10:22.077 "data_offset": 2048, 00:10:22.077 "data_size": 63488 00:10:22.077 }, 00:10:22.077 { 00:10:22.077 "name": "pt2", 00:10:22.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.077 "is_configured": true, 00:10:22.077 "data_offset": 2048, 00:10:22.077 "data_size": 63488 00:10:22.077 }, 00:10:22.077 { 00:10:22.077 "name": null, 00:10:22.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.077 "is_configured": false, 00:10:22.077 "data_offset": 2048, 00:10:22.077 "data_size": 63488 00:10:22.077 } 00:10:22.077 ] 00:10:22.077 }' 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.077 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.647 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:22.647 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.647 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.647 03:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:22.647 03:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.647 [2024-11-21 03:19:10.032913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.647 [2024-11-21 03:19:10.033121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.647 [2024-11-21 03:19:10.033176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:22.647 [2024-11-21 03:19:10.033214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.647 [2024-11-21 03:19:10.033711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.647 [2024-11-21 03:19:10.033777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.647 [2024-11-21 03:19:10.033893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.647 [2024-11-21 03:19:10.033981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.647 [2024-11-21 03:19:10.034140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:22.647 [2024-11-21 03:19:10.034183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.647 [2024-11-21 03:19:10.034468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:10:22.647 [2024-11-21 03:19:10.034650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:22.647 [2024-11-21 03:19:10.034700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:22.647 [2024-11-21 03:19:10.034859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.647 pt3 00:10:22.647 03:19:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.647 "name": "raid_bdev1", 00:10:22.647 "uuid": "7066d564-3d72-4b6b-83aa-bf8b48ac4899", 00:10:22.647 "strip_size_kb": 0, 00:10:22.647 "state": "online", 
00:10:22.647 "raid_level": "raid1", 00:10:22.647 "superblock": true, 00:10:22.647 "num_base_bdevs": 3, 00:10:22.647 "num_base_bdevs_discovered": 2, 00:10:22.647 "num_base_bdevs_operational": 2, 00:10:22.647 "base_bdevs_list": [ 00:10:22.647 { 00:10:22.647 "name": null, 00:10:22.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.647 "is_configured": false, 00:10:22.647 "data_offset": 2048, 00:10:22.647 "data_size": 63488 00:10:22.647 }, 00:10:22.647 { 00:10:22.647 "name": "pt2", 00:10:22.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.647 "is_configured": true, 00:10:22.647 "data_offset": 2048, 00:10:22.647 "data_size": 63488 00:10:22.647 }, 00:10:22.647 { 00:10:22.647 "name": "pt3", 00:10:22.647 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.647 "is_configured": true, 00:10:22.647 "data_offset": 2048, 00:10:22.647 "data_size": 63488 00:10:22.647 } 00:10:22.647 ] 00:10:22.647 }' 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.647 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.217 [2024-11-21 03:19:10.541338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7066d564-3d72-4b6b-83aa-bf8b48ac4899 '!=' 7066d564-3d72-4b6b-83aa-bf8b48ac4899 ']' 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81702 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81702 ']' 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81702 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81702 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81702' 00:10:23.217 killing process with pid 81702 00:10:23.217 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81702 00:10:23.217 [2024-11-21 03:19:10.629522] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.217 [2024-11-21 03:19:10.629646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.217 03:19:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81702 00:10:23.217 [2024-11-21 03:19:10.629715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.217 [2024-11-21 03:19:10.629729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:23.217 [2024-11-21 03:19:10.665509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.478 ************************************ 00:10:23.478 END TEST raid_superblock_test 00:10:23.478 ************************************ 00:10:23.478 03:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.478 00:10:23.478 real 0m6.586s 00:10:23.478 user 0m10.970s 00:10:23.478 sys 0m1.465s 00:10:23.478 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.478 03:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.478 03:19:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:23.478 03:19:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.478 03:19:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.478 03:19:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.479 ************************************ 00:10:23.479 START TEST raid_read_error_test 00:10:23.479 ************************************ 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.479 
03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:23.479 03:19:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tllxV4nJQT 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82138 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82138 00:10:23.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 82138 ']' 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.479 03:19:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.741 [2024-11-21 03:19:11.083224] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:10:23.741 [2024-11-21 03:19:11.083395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82138 ] 00:10:23.741 [2024-11-21 03:19:11.226913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:23.741 [2024-11-21 03:19:11.263267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.741 [2024-11-21 03:19:11.294397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.001 [2024-11-21 03:19:11.339274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.001 [2024-11-21 03:19:11.339326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.572 BaseBdev1_malloc 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.572 03:19:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.572 true 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.572 [2024-11-21 03:19:11.980635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.572 [2024-11-21 03:19:11.980814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.572 [2024-11-21 03:19:11.980864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.572 [2024-11-21 03:19:11.980927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.572 [2024-11-21 03:19:11.983365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.572 [2024-11-21 03:19:11.983473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.572 BaseBdev1 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.572 03:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.572 BaseBdev2_malloc 00:10:24.572 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:24.572 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.572 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.572 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.572 true 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.573 [2024-11-21 03:19:12.021936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.573 [2024-11-21 03:19:12.022119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.573 [2024-11-21 03:19:12.022167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.573 [2024-11-21 03:19:12.022204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.573 [2024-11-21 03:19:12.024624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.573 [2024-11-21 03:19:12.024723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.573 BaseBdev2 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.573 BaseBdev3_malloc 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.573 true 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.573 [2024-11-21 03:19:12.062971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.573 [2024-11-21 03:19:12.063174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.573 [2024-11-21 03:19:12.063203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.573 [2024-11-21 03:19:12.063216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.573 [2024-11-21 03:19:12.065537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.573 [2024-11-21 03:19:12.065586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.573 BaseBdev3 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.573 [2024-11-21 03:19:12.075040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.573 [2024-11-21 03:19:12.077193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.573 [2024-11-21 03:19:12.077296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.573 [2024-11-21 03:19:12.077491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.573 [2024-11-21 03:19:12.077504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.573 [2024-11-21 03:19:12.077796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:10:24.573 [2024-11-21 03:19:12.077954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.573 [2024-11-21 03:19:12.077967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:24.573 [2024-11-21 03:19:12.078134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.573 03:19:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.573 "name": "raid_bdev1", 00:10:24.573 "uuid": "08eaee80-0da0-4e8f-be93-90d9a4d0b863", 00:10:24.573 "strip_size_kb": 0, 00:10:24.573 "state": "online", 00:10:24.573 "raid_level": "raid1", 00:10:24.573 "superblock": true, 00:10:24.573 "num_base_bdevs": 3, 00:10:24.573 "num_base_bdevs_discovered": 3, 00:10:24.573 "num_base_bdevs_operational": 3, 00:10:24.573 "base_bdevs_list": [ 00:10:24.573 { 00:10:24.573 "name": "BaseBdev1", 00:10:24.573 "uuid": "9951128f-d91d-5137-be6b-1f77dd67b2aa", 00:10:24.573 "is_configured": true, 00:10:24.573 "data_offset": 2048, 00:10:24.573 "data_size": 63488 00:10:24.573 }, 00:10:24.573 
{ 00:10:24.573 "name": "BaseBdev2", 00:10:24.573 "uuid": "931564ac-ffe8-5007-9a4a-eb1dee85a2d1", 00:10:24.573 "is_configured": true, 00:10:24.573 "data_offset": 2048, 00:10:24.573 "data_size": 63488 00:10:24.573 }, 00:10:24.573 { 00:10:24.573 "name": "BaseBdev3", 00:10:24.573 "uuid": "14dd2aec-f184-582a-bcc1-b4388653f142", 00:10:24.573 "is_configured": true, 00:10:24.573 "data_offset": 2048, 00:10:24.573 "data_size": 63488 00:10:24.573 } 00:10:24.573 ] 00:10:24.573 }' 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.573 03:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.143 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.143 03:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.143 [2024-11-21 03:19:12.643650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.082 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.082 "name": "raid_bdev1", 00:10:26.082 "uuid": "08eaee80-0da0-4e8f-be93-90d9a4d0b863", 00:10:26.082 "strip_size_kb": 0, 00:10:26.082 "state": "online", 00:10:26.082 "raid_level": "raid1", 00:10:26.082 "superblock": true, 00:10:26.082 "num_base_bdevs": 3, 00:10:26.082 
"num_base_bdevs_discovered": 3, 00:10:26.082 "num_base_bdevs_operational": 3, 00:10:26.082 "base_bdevs_list": [ 00:10:26.083 { 00:10:26.083 "name": "BaseBdev1", 00:10:26.083 "uuid": "9951128f-d91d-5137-be6b-1f77dd67b2aa", 00:10:26.083 "is_configured": true, 00:10:26.083 "data_offset": 2048, 00:10:26.083 "data_size": 63488 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "name": "BaseBdev2", 00:10:26.083 "uuid": "931564ac-ffe8-5007-9a4a-eb1dee85a2d1", 00:10:26.083 "is_configured": true, 00:10:26.083 "data_offset": 2048, 00:10:26.083 "data_size": 63488 00:10:26.083 }, 00:10:26.083 { 00:10:26.083 "name": "BaseBdev3", 00:10:26.083 "uuid": "14dd2aec-f184-582a-bcc1-b4388653f142", 00:10:26.083 "is_configured": true, 00:10:26.083 "data_offset": 2048, 00:10:26.083 "data_size": 63488 00:10:26.083 } 00:10:26.083 ] 00:10:26.083 }' 00:10:26.083 03:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.083 03:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.654 [2024-11-21 03:19:14.018290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.654 [2024-11-21 03:19:14.018430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.654 [2024-11-21 03:19:14.021140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.654 [2024-11-21 03:19:14.021193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.654 [2024-11-21 03:19:14.021296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.654 
[2024-11-21 03:19:14.021307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:26.654 { 00:10:26.654 "results": [ 00:10:26.654 { 00:10:26.654 "job": "raid_bdev1", 00:10:26.654 "core_mask": "0x1", 00:10:26.654 "workload": "randrw", 00:10:26.654 "percentage": 50, 00:10:26.654 "status": "finished", 00:10:26.654 "queue_depth": 1, 00:10:26.654 "io_size": 131072, 00:10:26.654 "runtime": 1.372399, 00:10:26.654 "iops": 12838.831855750404, 00:10:26.654 "mibps": 1604.8539819688006, 00:10:26.654 "io_failed": 0, 00:10:26.654 "io_timeout": 0, 00:10:26.654 "avg_latency_us": 75.08939124013513, 00:10:26.654 "min_latency_us": 24.990848078096402, 00:10:26.654 "max_latency_us": 1692.2374270025277 00:10:26.654 } 00:10:26.654 ], 00:10:26.654 "core_count": 1 00:10:26.654 } 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82138 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 82138 ']' 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 82138 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82138 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.654 killing process with pid 82138 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82138' 00:10:26.654 
03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 82138 00:10:26.654 [2024-11-21 03:19:14.070001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.654 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 82138 00:10:26.654 [2024-11-21 03:19:14.097140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tllxV4nJQT 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:26.915 00:10:26.915 real 0m3.367s 00:10:26.915 user 0m4.299s 00:10:26.915 sys 0m0.576s 00:10:26.915 ************************************ 00:10:26.915 END TEST raid_read_error_test 00:10:26.915 ************************************ 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.915 03:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.915 03:19:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:26.915 03:19:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:26.915 03:19:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.915 03:19:14 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.915 ************************************ 00:10:26.915 START TEST raid_write_error_test 00:10:26.915 ************************************ 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:26.915 
03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tIoSrn4Hhz 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82267 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82267 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 82267 ']' 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.915 03:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.175 [2024-11-21 03:19:14.519796] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:27.175 [2024-11-21 03:19:14.519942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82267 ] 00:10:27.175 [2024-11-21 03:19:14.662880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:27.175 [2024-11-21 03:19:14.692280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.175 [2024-11-21 03:19:14.722670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.435 [2024-11-21 03:19:14.766084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.435 [2024-11-21 03:19:14.766218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.006 BaseBdev1_malloc 00:10:28.006 03:19:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.006 true 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.006 [2024-11-21 03:19:15.406200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.006 [2024-11-21 03:19:15.406280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.006 [2024-11-21 03:19:15.406301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.006 [2024-11-21 03:19:15.406324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.006 [2024-11-21 03:19:15.408569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.006 [2024-11-21 03:19:15.408711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.006 BaseBdev1 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.006 BaseBdev2_malloc 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.006 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 true 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 [2024-11-21 03:19:15.447314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.007 [2024-11-21 03:19:15.447487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.007 [2024-11-21 03:19:15.447511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:28.007 [2024-11-21 03:19:15.447523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.007 [2024-11-21 03:19:15.449726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.007 [2024-11-21 03:19:15.449775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.007 BaseBdev2 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 BaseBdev3_malloc 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 true 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 [2024-11-21 03:19:15.488234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.007 [2024-11-21 03:19:15.488308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.007 [2024-11-21 03:19:15.488328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:28.007 [2024-11-21 03:19:15.488339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.007 [2024-11-21 03:19:15.490523] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.007 [2024-11-21 03:19:15.490664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:28.007 BaseBdev3 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 [2024-11-21 03:19:15.500278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.007 [2024-11-21 03:19:15.502250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.007 [2024-11-21 03:19:15.502335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.007 [2024-11-21 03:19:15.502533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.007 [2024-11-21 03:19:15.502545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.007 [2024-11-21 03:19:15.502831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:10:28.007 [2024-11-21 03:19:15.502980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.007 [2024-11-21 03:19:15.502993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:28.007 [2024-11-21 03:19:15.503166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.007 "name": "raid_bdev1", 00:10:28.007 "uuid": "ed534a61-a6ee-4101-b174-a1ee1724065b", 00:10:28.007 "strip_size_kb": 0, 00:10:28.007 "state": "online", 00:10:28.007 "raid_level": "raid1", 00:10:28.007 "superblock": true, 00:10:28.007 
"num_base_bdevs": 3, 00:10:28.007 "num_base_bdevs_discovered": 3, 00:10:28.007 "num_base_bdevs_operational": 3, 00:10:28.007 "base_bdevs_list": [ 00:10:28.007 { 00:10:28.007 "name": "BaseBdev1", 00:10:28.007 "uuid": "324e5b63-3e94-5b78-b680-099f900f2969", 00:10:28.007 "is_configured": true, 00:10:28.007 "data_offset": 2048, 00:10:28.007 "data_size": 63488 00:10:28.007 }, 00:10:28.007 { 00:10:28.007 "name": "BaseBdev2", 00:10:28.007 "uuid": "521f758f-a0e1-5641-8a8a-7a51c633ff12", 00:10:28.007 "is_configured": true, 00:10:28.007 "data_offset": 2048, 00:10:28.007 "data_size": 63488 00:10:28.007 }, 00:10:28.007 { 00:10:28.007 "name": "BaseBdev3", 00:10:28.007 "uuid": "b510797e-5dc3-5d51-810c-25085a5533ea", 00:10:28.007 "is_configured": true, 00:10:28.007 "data_offset": 2048, 00:10:28.007 "data_size": 63488 00:10:28.007 } 00:10:28.007 ] 00:10:28.007 }' 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.007 03:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.577 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.577 03:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.577 [2024-11-21 03:19:16.064906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.516 [2024-11-21 03:19:16.982929] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:29.516 [2024-11-21 03:19:16.983139] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.516 [2024-11-21 03:19:16.983411] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006b10 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.516 03:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.517 03:19:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.517 03:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.517 03:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.517 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.517 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.517 "name": "raid_bdev1", 00:10:29.517 "uuid": "ed534a61-a6ee-4101-b174-a1ee1724065b", 00:10:29.517 "strip_size_kb": 0, 00:10:29.517 "state": "online", 00:10:29.517 "raid_level": "raid1", 00:10:29.517 "superblock": true, 00:10:29.517 "num_base_bdevs": 3, 00:10:29.517 "num_base_bdevs_discovered": 2, 00:10:29.517 "num_base_bdevs_operational": 2, 00:10:29.517 "base_bdevs_list": [ 00:10:29.517 { 00:10:29.517 "name": null, 00:10:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.517 "is_configured": false, 00:10:29.517 "data_offset": 0, 00:10:29.517 "data_size": 63488 00:10:29.517 }, 00:10:29.517 { 00:10:29.517 "name": "BaseBdev2", 00:10:29.517 "uuid": "521f758f-a0e1-5641-8a8a-7a51c633ff12", 00:10:29.517 "is_configured": true, 00:10:29.517 "data_offset": 2048, 00:10:29.517 "data_size": 63488 00:10:29.517 }, 00:10:29.517 { 00:10:29.517 "name": "BaseBdev3", 00:10:29.517 "uuid": "b510797e-5dc3-5d51-810c-25085a5533ea", 00:10:29.517 "is_configured": true, 00:10:29.517 "data_offset": 2048, 00:10:29.517 "data_size": 63488 00:10:29.517 } 00:10:29.517 ] 00:10:29.517 }' 00:10:29.517 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.517 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.086 [2024-11-21 03:19:17.466297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.086 [2024-11-21 03:19:17.466444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.086 [2024-11-21 03:19:17.469453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.086 [2024-11-21 03:19:17.469512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.086 [2024-11-21 03:19:17.469592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.086 [2024-11-21 03:19:17.469608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:30.086 { 00:10:30.086 "results": [ 00:10:30.086 { 00:10:30.086 "job": "raid_bdev1", 00:10:30.086 "core_mask": "0x1", 00:10:30.086 "workload": "randrw", 00:10:30.086 "percentage": 50, 00:10:30.086 "status": "finished", 00:10:30.086 "queue_depth": 1, 00:10:30.086 "io_size": 131072, 00:10:30.086 "runtime": 1.399341, 00:10:30.086 "iops": 14067.335981722825, 00:10:30.086 "mibps": 1758.416997715353, 00:10:30.086 "io_failed": 0, 00:10:30.086 "io_timeout": 0, 00:10:30.086 "avg_latency_us": 68.22488376537808, 00:10:30.086 "min_latency_us": 25.325546936285193, 00:10:30.086 "max_latency_us": 1613.6947616142247 00:10:30.086 } 00:10:30.086 ], 00:10:30.086 "core_count": 1 00:10:30.086 } 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82267 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 82267 ']' 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 82267 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82267 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82267' 00:10:30.086 killing process with pid 82267 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 82267 00:10:30.086 [2024-11-21 03:19:17.517861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.086 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 82267 00:10:30.086 [2024-11-21 03:19:17.545186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tIoSrn4Hhz 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:30.346 ************************************ 00:10:30.346 END TEST 
raid_write_error_test 00:10:30.346 ************************************ 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:30.346 00:10:30.346 real 0m3.365s 00:10:30.346 user 0m4.325s 00:10:30.346 sys 0m0.557s 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.346 03:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.346 03:19:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:30.346 03:19:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:30.346 03:19:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:30.346 03:19:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.346 03:19:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.346 03:19:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.346 ************************************ 00:10:30.347 START TEST raid_state_function_test 00:10:30.347 ************************************ 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.347 03:19:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:30.347 Process raid pid: 82394 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82394 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82394' 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82394 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82394 ']' 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.347 03:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.606 [2024-11-21 03:19:17.949598] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:10:30.606 [2024-11-21 03:19:17.949917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.606 [2024-11-21 03:19:18.093423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:30.607 [2024-11-21 03:19:18.116404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.607 [2024-11-21 03:19:18.147621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.866 [2024-11-21 03:19:18.192516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.866 [2024-11-21 03:19:18.192626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.435 [2024-11-21 03:19:18.823785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.435 [2024-11-21 03:19:18.823959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.435 [2024-11-21 03:19:18.824003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.435 [2024-11-21 03:19:18.824069] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.435 [2024-11-21 03:19:18.824102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.435 [2024-11-21 03:19:18.824153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.435 [2024-11-21 03:19:18.824187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.435 [2024-11-21 03:19:18.824214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.435 
03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.435 "name": "Existed_Raid", 00:10:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.435 "strip_size_kb": 64, 00:10:31.435 "state": "configuring", 00:10:31.435 "raid_level": "raid0", 00:10:31.435 "superblock": false, 00:10:31.435 "num_base_bdevs": 4, 00:10:31.435 "num_base_bdevs_discovered": 0, 00:10:31.435 "num_base_bdevs_operational": 4, 00:10:31.435 "base_bdevs_list": [ 00:10:31.435 { 00:10:31.435 "name": "BaseBdev1", 00:10:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.435 "is_configured": false, 00:10:31.435 "data_offset": 0, 00:10:31.435 "data_size": 0 00:10:31.435 }, 00:10:31.435 { 00:10:31.435 "name": "BaseBdev2", 00:10:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.435 "is_configured": false, 00:10:31.435 "data_offset": 0, 00:10:31.435 "data_size": 0 00:10:31.435 }, 00:10:31.435 { 00:10:31.435 "name": "BaseBdev3", 00:10:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.435 "is_configured": false, 00:10:31.435 "data_offset": 0, 00:10:31.435 "data_size": 0 00:10:31.435 }, 00:10:31.435 { 00:10:31.435 "name": "BaseBdev4", 00:10:31.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.435 "is_configured": false, 00:10:31.435 "data_offset": 0, 00:10:31.435 "data_size": 0 00:10:31.435 } 00:10:31.435 ] 00:10:31.435 }' 00:10:31.435 03:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.435 03:19:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.005 [2024-11-21 03:19:19.311809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.005 [2024-11-21 03:19:19.311953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.005 [2024-11-21 03:19:19.323845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.005 [2024-11-21 03:19:19.323956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.005 [2024-11-21 03:19:19.324025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.005 [2024-11-21 03:19:19.324073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.005 [2024-11-21 03:19:19.324110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.005 [2024-11-21 03:19:19.324145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:10:32.005 [2024-11-21 03:19:19.324178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.005 [2024-11-21 03:19:19.324215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.005 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.006 [2024-11-21 03:19:19.344809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.006 BaseBdev1 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.006 03:19:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.006 [ 00:10:32.006 { 00:10:32.006 "name": "BaseBdev1", 00:10:32.006 "aliases": [ 00:10:32.006 "8c8de418-e2e0-4959-a38c-a50050186d51" 00:10:32.006 ], 00:10:32.006 "product_name": "Malloc disk", 00:10:32.006 "block_size": 512, 00:10:32.006 "num_blocks": 65536, 00:10:32.006 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:32.006 "assigned_rate_limits": { 00:10:32.006 "rw_ios_per_sec": 0, 00:10:32.006 "rw_mbytes_per_sec": 0, 00:10:32.006 "r_mbytes_per_sec": 0, 00:10:32.006 "w_mbytes_per_sec": 0 00:10:32.006 }, 00:10:32.006 "claimed": true, 00:10:32.006 "claim_type": "exclusive_write", 00:10:32.006 "zoned": false, 00:10:32.006 "supported_io_types": { 00:10:32.006 "read": true, 00:10:32.006 "write": true, 00:10:32.006 "unmap": true, 00:10:32.006 "flush": true, 00:10:32.006 "reset": true, 00:10:32.006 "nvme_admin": false, 00:10:32.006 "nvme_io": false, 00:10:32.006 "nvme_io_md": false, 00:10:32.006 "write_zeroes": true, 00:10:32.006 "zcopy": true, 00:10:32.006 "get_zone_info": false, 00:10:32.006 "zone_management": false, 00:10:32.006 "zone_append": false, 00:10:32.006 "compare": false, 00:10:32.006 "compare_and_write": false, 00:10:32.006 "abort": true, 00:10:32.006 "seek_hole": false, 00:10:32.006 "seek_data": false, 00:10:32.006 "copy": true, 00:10:32.006 "nvme_iov_md": false 00:10:32.006 }, 00:10:32.006 "memory_domains": [ 00:10:32.006 { 00:10:32.006 "dma_device_id": "system", 00:10:32.006 "dma_device_type": 1 00:10:32.006 }, 00:10:32.006 { 00:10:32.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.006 "dma_device_type": 
2 00:10:32.006 } 00:10:32.006 ], 00:10:32.006 "driver_specific": {} 00:10:32.006 } 00:10:32.006 ] 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.006 "name": "Existed_Raid", 00:10:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.006 "strip_size_kb": 64, 00:10:32.006 "state": "configuring", 00:10:32.006 "raid_level": "raid0", 00:10:32.006 "superblock": false, 00:10:32.006 "num_base_bdevs": 4, 00:10:32.006 "num_base_bdevs_discovered": 1, 00:10:32.006 "num_base_bdevs_operational": 4, 00:10:32.006 "base_bdevs_list": [ 00:10:32.006 { 00:10:32.006 "name": "BaseBdev1", 00:10:32.006 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:32.006 "is_configured": true, 00:10:32.006 "data_offset": 0, 00:10:32.006 "data_size": 65536 00:10:32.006 }, 00:10:32.006 { 00:10:32.006 "name": "BaseBdev2", 00:10:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.006 "is_configured": false, 00:10:32.006 "data_offset": 0, 00:10:32.006 "data_size": 0 00:10:32.006 }, 00:10:32.006 { 00:10:32.006 "name": "BaseBdev3", 00:10:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.006 "is_configured": false, 00:10:32.006 "data_offset": 0, 00:10:32.006 "data_size": 0 00:10:32.006 }, 00:10:32.006 { 00:10:32.006 "name": "BaseBdev4", 00:10:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.006 "is_configured": false, 00:10:32.006 "data_offset": 0, 00:10:32.006 "data_size": 0 00:10:32.006 } 00:10:32.006 ] 00:10:32.006 }' 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.006 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:10:32.267 [2024-11-21 03:19:19.793015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.267 [2024-11-21 03:19:19.793200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.267 [2024-11-21 03:19:19.805063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.267 [2024-11-21 03:19:19.806981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.267 [2024-11-21 03:19:19.807060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.267 [2024-11-21 03:19:19.807073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.267 [2024-11-21 03:19:19.807081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.267 [2024-11-21 03:19:19.807090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.267 [2024-11-21 03:19:19.807098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.267 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.527 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.527 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.527 "name": "Existed_Raid", 00:10:32.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.527 
"strip_size_kb": 64, 00:10:32.527 "state": "configuring", 00:10:32.527 "raid_level": "raid0", 00:10:32.527 "superblock": false, 00:10:32.527 "num_base_bdevs": 4, 00:10:32.527 "num_base_bdevs_discovered": 1, 00:10:32.527 "num_base_bdevs_operational": 4, 00:10:32.527 "base_bdevs_list": [ 00:10:32.527 { 00:10:32.527 "name": "BaseBdev1", 00:10:32.527 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:32.527 "is_configured": true, 00:10:32.527 "data_offset": 0, 00:10:32.527 "data_size": 65536 00:10:32.527 }, 00:10:32.527 { 00:10:32.527 "name": "BaseBdev2", 00:10:32.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.527 "is_configured": false, 00:10:32.527 "data_offset": 0, 00:10:32.527 "data_size": 0 00:10:32.527 }, 00:10:32.527 { 00:10:32.527 "name": "BaseBdev3", 00:10:32.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.527 "is_configured": false, 00:10:32.527 "data_offset": 0, 00:10:32.527 "data_size": 0 00:10:32.527 }, 00:10:32.527 { 00:10:32.527 "name": "BaseBdev4", 00:10:32.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.527 "is_configured": false, 00:10:32.527 "data_offset": 0, 00:10:32.527 "data_size": 0 00:10:32.527 } 00:10:32.527 ] 00:10:32.527 }' 00:10:32.527 03:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.527 03:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.787 [2024-11-21 03:19:20.236457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.787 BaseBdev2 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.787 [ 00:10:32.787 { 00:10:32.787 "name": "BaseBdev2", 00:10:32.787 "aliases": [ 00:10:32.787 "bd9d1865-24f6-485e-a676-f20e7a2e6e59" 00:10:32.787 ], 00:10:32.787 "product_name": "Malloc disk", 00:10:32.787 "block_size": 512, 00:10:32.787 "num_blocks": 65536, 00:10:32.787 "uuid": "bd9d1865-24f6-485e-a676-f20e7a2e6e59", 00:10:32.787 "assigned_rate_limits": { 00:10:32.787 "rw_ios_per_sec": 0, 00:10:32.787 "rw_mbytes_per_sec": 0, 00:10:32.787 "r_mbytes_per_sec": 0, 00:10:32.787 "w_mbytes_per_sec": 0 00:10:32.787 
}, 00:10:32.787 "claimed": true, 00:10:32.787 "claim_type": "exclusive_write", 00:10:32.787 "zoned": false, 00:10:32.787 "supported_io_types": { 00:10:32.787 "read": true, 00:10:32.787 "write": true, 00:10:32.787 "unmap": true, 00:10:32.787 "flush": true, 00:10:32.787 "reset": true, 00:10:32.787 "nvme_admin": false, 00:10:32.787 "nvme_io": false, 00:10:32.787 "nvme_io_md": false, 00:10:32.787 "write_zeroes": true, 00:10:32.787 "zcopy": true, 00:10:32.787 "get_zone_info": false, 00:10:32.787 "zone_management": false, 00:10:32.787 "zone_append": false, 00:10:32.787 "compare": false, 00:10:32.787 "compare_and_write": false, 00:10:32.787 "abort": true, 00:10:32.787 "seek_hole": false, 00:10:32.787 "seek_data": false, 00:10:32.787 "copy": true, 00:10:32.787 "nvme_iov_md": false 00:10:32.787 }, 00:10:32.787 "memory_domains": [ 00:10:32.787 { 00:10:32.787 "dma_device_id": "system", 00:10:32.787 "dma_device_type": 1 00:10:32.787 }, 00:10:32.787 { 00:10:32.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.787 "dma_device_type": 2 00:10:32.787 } 00:10:32.787 ], 00:10:32.787 "driver_specific": {} 00:10:32.787 } 00:10:32.787 ] 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.787 03:19:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.787 "name": "Existed_Raid", 00:10:32.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.787 "strip_size_kb": 64, 00:10:32.787 "state": "configuring", 00:10:32.787 "raid_level": "raid0", 00:10:32.787 "superblock": false, 00:10:32.787 "num_base_bdevs": 4, 00:10:32.787 "num_base_bdevs_discovered": 2, 00:10:32.787 "num_base_bdevs_operational": 4, 00:10:32.787 "base_bdevs_list": [ 00:10:32.787 { 00:10:32.787 "name": "BaseBdev1", 00:10:32.787 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:32.787 "is_configured": true, 00:10:32.787 "data_offset": 0, 
00:10:32.787 "data_size": 65536 00:10:32.787 }, 00:10:32.787 { 00:10:32.787 "name": "BaseBdev2", 00:10:32.787 "uuid": "bd9d1865-24f6-485e-a676-f20e7a2e6e59", 00:10:32.787 "is_configured": true, 00:10:32.787 "data_offset": 0, 00:10:32.787 "data_size": 65536 00:10:32.787 }, 00:10:32.787 { 00:10:32.787 "name": "BaseBdev3", 00:10:32.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.787 "is_configured": false, 00:10:32.787 "data_offset": 0, 00:10:32.787 "data_size": 0 00:10:32.787 }, 00:10:32.787 { 00:10:32.787 "name": "BaseBdev4", 00:10:32.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.787 "is_configured": false, 00:10:32.787 "data_offset": 0, 00:10:32.787 "data_size": 0 00:10:32.787 } 00:10:32.787 ] 00:10:32.787 }' 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.787 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.358 [2024-11-21 03:19:20.748676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.358 BaseBdev3 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.358 [ 00:10:33.358 { 00:10:33.358 "name": "BaseBdev3", 00:10:33.358 "aliases": [ 00:10:33.358 "2bdb5ffe-23cc-42e4-8ed0-48269f4df7b5" 00:10:33.358 ], 00:10:33.358 "product_name": "Malloc disk", 00:10:33.358 "block_size": 512, 00:10:33.358 "num_blocks": 65536, 00:10:33.358 "uuid": "2bdb5ffe-23cc-42e4-8ed0-48269f4df7b5", 00:10:33.358 "assigned_rate_limits": { 00:10:33.358 "rw_ios_per_sec": 0, 00:10:33.358 "rw_mbytes_per_sec": 0, 00:10:33.358 "r_mbytes_per_sec": 0, 00:10:33.358 "w_mbytes_per_sec": 0 00:10:33.358 }, 00:10:33.358 "claimed": true, 00:10:33.358 "claim_type": "exclusive_write", 00:10:33.358 "zoned": false, 00:10:33.358 "supported_io_types": { 00:10:33.358 "read": true, 00:10:33.358 "write": true, 00:10:33.358 "unmap": true, 00:10:33.358 "flush": true, 00:10:33.358 "reset": true, 00:10:33.358 "nvme_admin": false, 00:10:33.358 "nvme_io": false, 00:10:33.358 "nvme_io_md": false, 00:10:33.358 "write_zeroes": true, 00:10:33.358 "zcopy": true, 00:10:33.358 
"get_zone_info": false, 00:10:33.358 "zone_management": false, 00:10:33.358 "zone_append": false, 00:10:33.358 "compare": false, 00:10:33.358 "compare_and_write": false, 00:10:33.358 "abort": true, 00:10:33.358 "seek_hole": false, 00:10:33.358 "seek_data": false, 00:10:33.358 "copy": true, 00:10:33.358 "nvme_iov_md": false 00:10:33.358 }, 00:10:33.358 "memory_domains": [ 00:10:33.358 { 00:10:33.358 "dma_device_id": "system", 00:10:33.358 "dma_device_type": 1 00:10:33.358 }, 00:10:33.358 { 00:10:33.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.358 "dma_device_type": 2 00:10:33.358 } 00:10:33.358 ], 00:10:33.358 "driver_specific": {} 00:10:33.358 } 00:10:33.358 ] 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.358 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.358 "name": "Existed_Raid", 00:10:33.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.358 "strip_size_kb": 64, 00:10:33.358 "state": "configuring", 00:10:33.358 "raid_level": "raid0", 00:10:33.358 "superblock": false, 00:10:33.358 "num_base_bdevs": 4, 00:10:33.358 "num_base_bdevs_discovered": 3, 00:10:33.358 "num_base_bdevs_operational": 4, 00:10:33.358 "base_bdevs_list": [ 00:10:33.358 { 00:10:33.358 "name": "BaseBdev1", 00:10:33.358 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:33.358 "is_configured": true, 00:10:33.358 "data_offset": 0, 00:10:33.358 "data_size": 65536 00:10:33.358 }, 00:10:33.358 { 00:10:33.358 "name": "BaseBdev2", 00:10:33.358 "uuid": "bd9d1865-24f6-485e-a676-f20e7a2e6e59", 00:10:33.358 "is_configured": true, 00:10:33.359 "data_offset": 0, 00:10:33.359 "data_size": 65536 00:10:33.359 }, 00:10:33.359 { 00:10:33.359 "name": "BaseBdev3", 00:10:33.359 "uuid": "2bdb5ffe-23cc-42e4-8ed0-48269f4df7b5", 00:10:33.359 "is_configured": true, 00:10:33.359 "data_offset": 0, 00:10:33.359 "data_size": 65536 
00:10:33.359 }, 00:10:33.359 { 00:10:33.359 "name": "BaseBdev4", 00:10:33.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.359 "is_configured": false, 00:10:33.359 "data_offset": 0, 00:10:33.359 "data_size": 0 00:10:33.359 } 00:10:33.359 ] 00:10:33.359 }' 00:10:33.359 03:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.359 03:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.931 [2024-11-21 03:19:21.248175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.931 [2024-11-21 03:19:21.248229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:33.931 [2024-11-21 03:19:21.248242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:33.931 [2024-11-21 03:19:21.248541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:33.931 [2024-11-21 03:19:21.248710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:33.931 [2024-11-21 03:19:21.248721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:33.931 [2024-11-21 03:19:21.248956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.931 BaseBdev4 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:33.931 03:19:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.931 [ 00:10:33.931 { 00:10:33.931 "name": "BaseBdev4", 00:10:33.931 "aliases": [ 00:10:33.931 "42f86034-e570-4052-a3d8-9869e85249d0" 00:10:33.931 ], 00:10:33.931 "product_name": "Malloc disk", 00:10:33.931 "block_size": 512, 00:10:33.931 "num_blocks": 65536, 00:10:33.931 "uuid": "42f86034-e570-4052-a3d8-9869e85249d0", 00:10:33.931 "assigned_rate_limits": { 00:10:33.931 "rw_ios_per_sec": 0, 00:10:33.931 "rw_mbytes_per_sec": 0, 00:10:33.931 "r_mbytes_per_sec": 0, 00:10:33.931 "w_mbytes_per_sec": 0 00:10:33.931 }, 00:10:33.931 "claimed": true, 00:10:33.931 "claim_type": "exclusive_write", 00:10:33.931 "zoned": false, 00:10:33.931 "supported_io_types": { 
00:10:33.931 "read": true, 00:10:33.931 "write": true, 00:10:33.931 "unmap": true, 00:10:33.931 "flush": true, 00:10:33.931 "reset": true, 00:10:33.931 "nvme_admin": false, 00:10:33.931 "nvme_io": false, 00:10:33.931 "nvme_io_md": false, 00:10:33.931 "write_zeroes": true, 00:10:33.931 "zcopy": true, 00:10:33.931 "get_zone_info": false, 00:10:33.931 "zone_management": false, 00:10:33.931 "zone_append": false, 00:10:33.931 "compare": false, 00:10:33.931 "compare_and_write": false, 00:10:33.931 "abort": true, 00:10:33.931 "seek_hole": false, 00:10:33.931 "seek_data": false, 00:10:33.931 "copy": true, 00:10:33.931 "nvme_iov_md": false 00:10:33.931 }, 00:10:33.931 "memory_domains": [ 00:10:33.931 { 00:10:33.931 "dma_device_id": "system", 00:10:33.931 "dma_device_type": 1 00:10:33.931 }, 00:10:33.931 { 00:10:33.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.931 "dma_device_type": 2 00:10:33.931 } 00:10:33.931 ], 00:10:33.931 "driver_specific": {} 00:10:33.931 } 00:10:33.931 ] 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.931 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.931 "name": "Existed_Raid", 00:10:33.931 "uuid": "b4bae399-ea5c-4a38-a15e-f103edefb7c5", 00:10:33.931 "strip_size_kb": 64, 00:10:33.931 "state": "online", 00:10:33.931 "raid_level": "raid0", 00:10:33.931 "superblock": false, 00:10:33.931 "num_base_bdevs": 4, 00:10:33.931 "num_base_bdevs_discovered": 4, 00:10:33.931 "num_base_bdevs_operational": 4, 00:10:33.931 "base_bdevs_list": [ 00:10:33.931 { 00:10:33.931 "name": "BaseBdev1", 00:10:33.931 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:33.931 "is_configured": true, 00:10:33.931 "data_offset": 0, 00:10:33.931 "data_size": 65536 00:10:33.931 }, 00:10:33.931 { 00:10:33.931 "name": "BaseBdev2", 00:10:33.932 "uuid": "bd9d1865-24f6-485e-a676-f20e7a2e6e59", 00:10:33.932 
"is_configured": true, 00:10:33.932 "data_offset": 0, 00:10:33.932 "data_size": 65536 00:10:33.932 }, 00:10:33.932 { 00:10:33.932 "name": "BaseBdev3", 00:10:33.932 "uuid": "2bdb5ffe-23cc-42e4-8ed0-48269f4df7b5", 00:10:33.932 "is_configured": true, 00:10:33.932 "data_offset": 0, 00:10:33.932 "data_size": 65536 00:10:33.932 }, 00:10:33.932 { 00:10:33.932 "name": "BaseBdev4", 00:10:33.932 "uuid": "42f86034-e570-4052-a3d8-9869e85249d0", 00:10:33.932 "is_configured": true, 00:10:33.932 "data_offset": 0, 00:10:33.932 "data_size": 65536 00:10:33.932 } 00:10:33.932 ] 00:10:33.932 }' 00:10:33.932 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.932 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.192 [2024-11-21 03:19:21.728804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:34.192 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.452 "name": "Existed_Raid", 00:10:34.452 "aliases": [ 00:10:34.452 "b4bae399-ea5c-4a38-a15e-f103edefb7c5" 00:10:34.452 ], 00:10:34.452 "product_name": "Raid Volume", 00:10:34.452 "block_size": 512, 00:10:34.452 "num_blocks": 262144, 00:10:34.452 "uuid": "b4bae399-ea5c-4a38-a15e-f103edefb7c5", 00:10:34.452 "assigned_rate_limits": { 00:10:34.452 "rw_ios_per_sec": 0, 00:10:34.452 "rw_mbytes_per_sec": 0, 00:10:34.452 "r_mbytes_per_sec": 0, 00:10:34.452 "w_mbytes_per_sec": 0 00:10:34.452 }, 00:10:34.452 "claimed": false, 00:10:34.452 "zoned": false, 00:10:34.452 "supported_io_types": { 00:10:34.452 "read": true, 00:10:34.452 "write": true, 00:10:34.452 "unmap": true, 00:10:34.452 "flush": true, 00:10:34.452 "reset": true, 00:10:34.452 "nvme_admin": false, 00:10:34.452 "nvme_io": false, 00:10:34.452 "nvme_io_md": false, 00:10:34.452 "write_zeroes": true, 00:10:34.452 "zcopy": false, 00:10:34.452 "get_zone_info": false, 00:10:34.452 "zone_management": false, 00:10:34.452 "zone_append": false, 00:10:34.452 "compare": false, 00:10:34.452 "compare_and_write": false, 00:10:34.452 "abort": false, 00:10:34.452 "seek_hole": false, 00:10:34.452 "seek_data": false, 00:10:34.452 "copy": false, 00:10:34.452 "nvme_iov_md": false 00:10:34.452 }, 00:10:34.452 "memory_domains": [ 00:10:34.452 { 00:10:34.452 "dma_device_id": "system", 00:10:34.452 "dma_device_type": 1 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.452 "dma_device_type": 2 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "dma_device_id": "system", 00:10:34.452 "dma_device_type": 1 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.452 "dma_device_type": 2 00:10:34.452 }, 00:10:34.452 { 
00:10:34.452 "dma_device_id": "system", 00:10:34.452 "dma_device_type": 1 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.452 "dma_device_type": 2 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "dma_device_id": "system", 00:10:34.452 "dma_device_type": 1 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.452 "dma_device_type": 2 00:10:34.452 } 00:10:34.452 ], 00:10:34.452 "driver_specific": { 00:10:34.452 "raid": { 00:10:34.452 "uuid": "b4bae399-ea5c-4a38-a15e-f103edefb7c5", 00:10:34.452 "strip_size_kb": 64, 00:10:34.452 "state": "online", 00:10:34.452 "raid_level": "raid0", 00:10:34.452 "superblock": false, 00:10:34.452 "num_base_bdevs": 4, 00:10:34.452 "num_base_bdevs_discovered": 4, 00:10:34.452 "num_base_bdevs_operational": 4, 00:10:34.452 "base_bdevs_list": [ 00:10:34.452 { 00:10:34.452 "name": "BaseBdev1", 00:10:34.452 "uuid": "8c8de418-e2e0-4959-a38c-a50050186d51", 00:10:34.452 "is_configured": true, 00:10:34.452 "data_offset": 0, 00:10:34.452 "data_size": 65536 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "name": "BaseBdev2", 00:10:34.452 "uuid": "bd9d1865-24f6-485e-a676-f20e7a2e6e59", 00:10:34.452 "is_configured": true, 00:10:34.452 "data_offset": 0, 00:10:34.452 "data_size": 65536 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "name": "BaseBdev3", 00:10:34.452 "uuid": "2bdb5ffe-23cc-42e4-8ed0-48269f4df7b5", 00:10:34.452 "is_configured": true, 00:10:34.452 "data_offset": 0, 00:10:34.452 "data_size": 65536 00:10:34.452 }, 00:10:34.452 { 00:10:34.452 "name": "BaseBdev4", 00:10:34.452 "uuid": "42f86034-e570-4052-a3d8-9869e85249d0", 00:10:34.452 "is_configured": true, 00:10:34.452 "data_offset": 0, 00:10:34.452 "data_size": 65536 00:10:34.452 } 00:10:34.452 ] 00:10:34.452 } 00:10:34.452 } 00:10:34.452 }' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
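The trace above repeatedly runs `rpc_cmd bdev_raid_get_bdevs all` and narrows the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before checking fields like `state`. A minimal standalone sketch of that select-by-name filter, using an illustrative inline JSON sample rather than live RPC output (requires `jq`):

```shell
#!/usr/bin/env bash
# Illustrative sample standing in for `rpc_cmd bdev_raid_get_bdevs all` output.
raid_bdevs='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":2},
             {"name":"Other_Raid","state":"online","num_base_bdevs_discovered":4}]'

# Same filter shape as bdev_raid.sh@113: keep only the bdev we care about.
raid_bdev_info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Individual fields can then be compared against the expected state.
state=$(echo "$raid_bdev_info" | jq -r '.state')
echo "$state"
```

The test scripts use the same pattern with the real RPC output, comparing `state`, `raid_level`, and the discovered/operational base bdev counts against expectations.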
00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.452 BaseBdev2 00:10:34.452 BaseBdev3 00:10:34.452 BaseBdev4' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.452 03:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.452 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.713 [2024-11-21 03:19:22.024592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.713 [2024-11-21 03:19:22.024723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.713 [2024-11-21 03:19:22.024826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.713 "name": "Existed_Raid", 00:10:34.713 "uuid": "b4bae399-ea5c-4a38-a15e-f103edefb7c5", 00:10:34.713 "strip_size_kb": 64, 00:10:34.713 "state": "offline", 00:10:34.713 "raid_level": "raid0", 00:10:34.713 "superblock": false, 00:10:34.713 "num_base_bdevs": 4, 00:10:34.713 "num_base_bdevs_discovered": 3, 00:10:34.713 "num_base_bdevs_operational": 3, 00:10:34.713 "base_bdevs_list": [ 00:10:34.713 { 00:10:34.713 "name": null, 00:10:34.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.713 "is_configured": false, 00:10:34.713 "data_offset": 0, 00:10:34.713 "data_size": 65536 00:10:34.713 }, 00:10:34.713 { 
00:10:34.713 "name": "BaseBdev2", 00:10:34.713 "uuid": "bd9d1865-24f6-485e-a676-f20e7a2e6e59", 00:10:34.713 "is_configured": true, 00:10:34.713 "data_offset": 0, 00:10:34.713 "data_size": 65536 00:10:34.713 }, 00:10:34.713 { 00:10:34.713 "name": "BaseBdev3", 00:10:34.713 "uuid": "2bdb5ffe-23cc-42e4-8ed0-48269f4df7b5", 00:10:34.713 "is_configured": true, 00:10:34.713 "data_offset": 0, 00:10:34.713 "data_size": 65536 00:10:34.713 }, 00:10:34.713 { 00:10:34.713 "name": "BaseBdev4", 00:10:34.713 "uuid": "42f86034-e570-4052-a3d8-9869e85249d0", 00:10:34.713 "is_configured": true, 00:10:34.713 "data_offset": 0, 00:10:34.713 "data_size": 65536 00:10:34.713 } 00:10:34.713 ] 00:10:34.713 }' 00:10:34.713 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.714 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete 
BaseBdev2 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.974 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.974 [2024-11-21 03:19:22.524992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.234 [2024-11-21 03:19:22.584784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.234 03:19:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.234 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 [2024-11-21 03:19:22.656590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:35.235 [2024-11-21 03:19:22.656671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 BaseBdev2 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.235 
03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 [ 00:10:35.235 { 00:10:35.235 "name": "BaseBdev2", 00:10:35.235 "aliases": [ 00:10:35.235 "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f" 00:10:35.235 ], 00:10:35.235 "product_name": "Malloc disk", 00:10:35.235 "block_size": 512, 00:10:35.235 "num_blocks": 65536, 00:10:35.235 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:35.235 "assigned_rate_limits": { 00:10:35.235 "rw_ios_per_sec": 0, 00:10:35.235 "rw_mbytes_per_sec": 0, 00:10:35.235 "r_mbytes_per_sec": 0, 00:10:35.235 "w_mbytes_per_sec": 0 00:10:35.235 }, 00:10:35.235 "claimed": false, 00:10:35.235 "zoned": false, 00:10:35.235 "supported_io_types": { 00:10:35.235 "read": true, 00:10:35.235 "write": true, 00:10:35.235 "unmap": true, 00:10:35.235 "flush": true, 00:10:35.235 "reset": true, 00:10:35.235 "nvme_admin": false, 00:10:35.235 "nvme_io": false, 00:10:35.235 "nvme_io_md": false, 00:10:35.235 "write_zeroes": true, 00:10:35.235 "zcopy": true, 00:10:35.235 "get_zone_info": false, 00:10:35.235 "zone_management": false, 00:10:35.235 "zone_append": false, 00:10:35.235 "compare": false, 00:10:35.235 "compare_and_write": 
false, 00:10:35.235 "abort": true, 00:10:35.235 "seek_hole": false, 00:10:35.235 "seek_data": false, 00:10:35.235 "copy": true, 00:10:35.235 "nvme_iov_md": false 00:10:35.235 }, 00:10:35.235 "memory_domains": [ 00:10:35.235 { 00:10:35.235 "dma_device_id": "system", 00:10:35.235 "dma_device_type": 1 00:10:35.235 }, 00:10:35.235 { 00:10:35.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.235 "dma_device_type": 2 00:10:35.235 } 00:10:35.235 ], 00:10:35.235 "driver_specific": {} 00:10:35.235 } 00:10:35.235 ] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 BaseBdev3 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.235 03:19:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.235 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.496 [ 00:10:35.496 { 00:10:35.496 "name": "BaseBdev3", 00:10:35.496 "aliases": [ 00:10:35.496 "cba89cb1-d54c-45aa-a949-6b7fd8fabc22" 00:10:35.496 ], 00:10:35.496 "product_name": "Malloc disk", 00:10:35.496 "block_size": 512, 00:10:35.496 "num_blocks": 65536, 00:10:35.496 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:35.496 "assigned_rate_limits": { 00:10:35.496 "rw_ios_per_sec": 0, 00:10:35.496 "rw_mbytes_per_sec": 0, 00:10:35.496 "r_mbytes_per_sec": 0, 00:10:35.496 "w_mbytes_per_sec": 0 00:10:35.496 }, 00:10:35.496 "claimed": false, 00:10:35.496 "zoned": false, 00:10:35.496 "supported_io_types": { 00:10:35.496 "read": true, 00:10:35.496 "write": true, 00:10:35.496 "unmap": true, 00:10:35.496 "flush": true, 00:10:35.496 "reset": true, 00:10:35.496 "nvme_admin": false, 00:10:35.496 "nvme_io": false, 00:10:35.496 "nvme_io_md": false, 00:10:35.496 "write_zeroes": true, 00:10:35.496 "zcopy": true, 00:10:35.496 "get_zone_info": false, 00:10:35.496 "zone_management": false, 00:10:35.496 "zone_append": false, 00:10:35.496 "compare": false, 00:10:35.496 "compare_and_write": false, 
00:10:35.496 "abort": true, 00:10:35.496 "seek_hole": false, 00:10:35.496 "seek_data": false, 00:10:35.496 "copy": true, 00:10:35.496 "nvme_iov_md": false 00:10:35.496 }, 00:10:35.496 "memory_domains": [ 00:10:35.496 { 00:10:35.496 "dma_device_id": "system", 00:10:35.496 "dma_device_type": 1 00:10:35.496 }, 00:10:35.496 { 00:10:35.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.496 "dma_device_type": 2 00:10:35.496 } 00:10:35.496 ], 00:10:35.496 "driver_specific": {} 00:10:35.496 } 00:10:35.496 ] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.496 BaseBdev4 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.496 03:19:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.496 [ 00:10:35.496 { 00:10:35.496 "name": "BaseBdev4", 00:10:35.496 "aliases": [ 00:10:35.496 "46dec9bc-9458-4c4e-b9ce-5add040684f6" 00:10:35.496 ], 00:10:35.496 "product_name": "Malloc disk", 00:10:35.496 "block_size": 512, 00:10:35.496 "num_blocks": 65536, 00:10:35.496 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:35.496 "assigned_rate_limits": { 00:10:35.496 "rw_ios_per_sec": 0, 00:10:35.496 "rw_mbytes_per_sec": 0, 00:10:35.496 "r_mbytes_per_sec": 0, 00:10:35.496 "w_mbytes_per_sec": 0 00:10:35.496 }, 00:10:35.496 "claimed": false, 00:10:35.496 "zoned": false, 00:10:35.496 "supported_io_types": { 00:10:35.496 "read": true, 00:10:35.496 "write": true, 00:10:35.496 "unmap": true, 00:10:35.496 "flush": true, 00:10:35.496 "reset": true, 00:10:35.496 "nvme_admin": false, 00:10:35.496 "nvme_io": false, 00:10:35.496 "nvme_io_md": false, 00:10:35.496 "write_zeroes": true, 00:10:35.496 "zcopy": true, 00:10:35.496 "get_zone_info": false, 00:10:35.496 "zone_management": false, 00:10:35.496 "zone_append": false, 00:10:35.496 "compare": false, 00:10:35.496 "compare_and_write": false, 
00:10:35.496 "abort": true, 00:10:35.496 "seek_hole": false, 00:10:35.496 "seek_data": false, 00:10:35.496 "copy": true, 00:10:35.496 "nvme_iov_md": false 00:10:35.496 }, 00:10:35.496 "memory_domains": [ 00:10:35.496 { 00:10:35.496 "dma_device_id": "system", 00:10:35.496 "dma_device_type": 1 00:10:35.496 }, 00:10:35.496 { 00:10:35.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.496 "dma_device_type": 2 00:10:35.496 } 00:10:35.496 ], 00:10:35.496 "driver_specific": {} 00:10:35.496 } 00:10:35.496 ] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.496 [2024-11-21 03:19:22.879941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.496 [2024-11-21 03:19:22.880113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.496 [2024-11-21 03:19:22.880161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.496 [2024-11-21 03:19:22.882209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.496 [2024-11-21 03:19:22.882319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.496 03:19:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.496 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.496 "name": "Existed_Raid", 00:10:35.496 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:35.496 "strip_size_kb": 64, 00:10:35.496 "state": "configuring", 00:10:35.496 "raid_level": "raid0", 00:10:35.496 "superblock": false, 00:10:35.496 "num_base_bdevs": 4, 00:10:35.496 "num_base_bdevs_discovered": 3, 00:10:35.496 "num_base_bdevs_operational": 4, 00:10:35.496 "base_bdevs_list": [ 00:10:35.496 { 00:10:35.496 "name": "BaseBdev1", 00:10:35.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.496 "is_configured": false, 00:10:35.496 "data_offset": 0, 00:10:35.496 "data_size": 0 00:10:35.496 }, 00:10:35.496 { 00:10:35.496 "name": "BaseBdev2", 00:10:35.497 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:35.497 "is_configured": true, 00:10:35.497 "data_offset": 0, 00:10:35.497 "data_size": 65536 00:10:35.497 }, 00:10:35.497 { 00:10:35.497 "name": "BaseBdev3", 00:10:35.497 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:35.497 "is_configured": true, 00:10:35.497 "data_offset": 0, 00:10:35.497 "data_size": 65536 00:10:35.497 }, 00:10:35.497 { 00:10:35.497 "name": "BaseBdev4", 00:10:35.497 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:35.497 "is_configured": true, 00:10:35.497 "data_offset": 0, 00:10:35.497 "data_size": 65536 00:10:35.497 } 00:10:35.497 ] 00:10:35.497 }' 00:10:35.497 03:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.497 03:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.066 [2024-11-21 03:19:23.344040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.066 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.067 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.067 "name": "Existed_Raid", 00:10:36.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.067 
"strip_size_kb": 64, 00:10:36.067 "state": "configuring", 00:10:36.067 "raid_level": "raid0", 00:10:36.067 "superblock": false, 00:10:36.067 "num_base_bdevs": 4, 00:10:36.067 "num_base_bdevs_discovered": 2, 00:10:36.067 "num_base_bdevs_operational": 4, 00:10:36.067 "base_bdevs_list": [ 00:10:36.067 { 00:10:36.067 "name": "BaseBdev1", 00:10:36.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.067 "is_configured": false, 00:10:36.067 "data_offset": 0, 00:10:36.067 "data_size": 0 00:10:36.067 }, 00:10:36.067 { 00:10:36.067 "name": null, 00:10:36.067 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:36.067 "is_configured": false, 00:10:36.067 "data_offset": 0, 00:10:36.067 "data_size": 65536 00:10:36.067 }, 00:10:36.067 { 00:10:36.067 "name": "BaseBdev3", 00:10:36.067 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:36.067 "is_configured": true, 00:10:36.067 "data_offset": 0, 00:10:36.067 "data_size": 65536 00:10:36.067 }, 00:10:36.067 { 00:10:36.067 "name": "BaseBdev4", 00:10:36.067 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:36.067 "is_configured": true, 00:10:36.067 "data_offset": 0, 00:10:36.067 "data_size": 65536 00:10:36.067 } 00:10:36.067 ] 00:10:36.067 }' 00:10:36.067 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.067 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.327 
03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.327 [2024-11-21 03:19:23.859499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.327 BaseBdev1 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.327 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.328 03:19:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.328 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.328 [ 00:10:36.328 { 00:10:36.328 "name": "BaseBdev1", 00:10:36.328 "aliases": [ 00:10:36.328 "730ae29a-82d0-4984-9fd9-42103667cab3" 00:10:36.328 ], 00:10:36.328 "product_name": "Malloc disk", 00:10:36.328 "block_size": 512, 00:10:36.328 "num_blocks": 65536, 00:10:36.328 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:36.328 "assigned_rate_limits": { 00:10:36.328 "rw_ios_per_sec": 0, 00:10:36.328 "rw_mbytes_per_sec": 0, 00:10:36.328 "r_mbytes_per_sec": 0, 00:10:36.328 "w_mbytes_per_sec": 0 00:10:36.328 }, 00:10:36.328 "claimed": true, 00:10:36.328 "claim_type": "exclusive_write", 00:10:36.328 "zoned": false, 00:10:36.328 "supported_io_types": { 00:10:36.328 "read": true, 00:10:36.328 "write": true, 00:10:36.328 "unmap": true, 00:10:36.328 "flush": true, 00:10:36.328 "reset": true, 00:10:36.328 "nvme_admin": false, 00:10:36.328 "nvme_io": false, 00:10:36.328 "nvme_io_md": false, 00:10:36.328 "write_zeroes": true, 00:10:36.589 "zcopy": true, 00:10:36.589 "get_zone_info": false, 00:10:36.589 "zone_management": false, 00:10:36.589 "zone_append": false, 00:10:36.589 "compare": false, 00:10:36.589 "compare_and_write": false, 00:10:36.589 "abort": true, 00:10:36.589 "seek_hole": false, 00:10:36.589 "seek_data": false, 00:10:36.589 "copy": true, 00:10:36.589 "nvme_iov_md": false 00:10:36.589 }, 00:10:36.589 "memory_domains": [ 00:10:36.589 { 00:10:36.589 "dma_device_id": "system", 00:10:36.589 "dma_device_type": 1 00:10:36.589 }, 00:10:36.589 { 00:10:36.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.589 "dma_device_type": 2 00:10:36.589 } 00:10:36.589 ], 00:10:36.589 "driver_specific": {} 00:10:36.589 } 00:10:36.589 ] 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.589 03:19:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.589 "name": "Existed_Raid", 00:10:36.589 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:36.589 "strip_size_kb": 64, 00:10:36.589 "state": "configuring", 00:10:36.589 "raid_level": "raid0", 00:10:36.589 "superblock": false, 00:10:36.589 "num_base_bdevs": 4, 00:10:36.589 "num_base_bdevs_discovered": 3, 00:10:36.589 "num_base_bdevs_operational": 4, 00:10:36.589 "base_bdevs_list": [ 00:10:36.589 { 00:10:36.589 "name": "BaseBdev1", 00:10:36.589 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:36.589 "is_configured": true, 00:10:36.589 "data_offset": 0, 00:10:36.589 "data_size": 65536 00:10:36.589 }, 00:10:36.589 { 00:10:36.589 "name": null, 00:10:36.589 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:36.589 "is_configured": false, 00:10:36.589 "data_offset": 0, 00:10:36.589 "data_size": 65536 00:10:36.589 }, 00:10:36.589 { 00:10:36.589 "name": "BaseBdev3", 00:10:36.589 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:36.589 "is_configured": true, 00:10:36.589 "data_offset": 0, 00:10:36.589 "data_size": 65536 00:10:36.589 }, 00:10:36.589 { 00:10:36.589 "name": "BaseBdev4", 00:10:36.589 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:36.589 "is_configured": true, 00:10:36.589 "data_offset": 0, 00:10:36.589 "data_size": 65536 00:10:36.589 } 00:10:36.589 ] 00:10:36.589 }' 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.589 03:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.849 [2024-11-21 03:19:24.387755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.849 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.109 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.109 "name": "Existed_Raid", 00:10:37.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.109 "strip_size_kb": 64, 00:10:37.109 "state": "configuring", 00:10:37.109 "raid_level": "raid0", 00:10:37.109 "superblock": false, 00:10:37.109 "num_base_bdevs": 4, 00:10:37.109 "num_base_bdevs_discovered": 2, 00:10:37.109 "num_base_bdevs_operational": 4, 00:10:37.109 "base_bdevs_list": [ 00:10:37.109 { 00:10:37.109 "name": "BaseBdev1", 00:10:37.109 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:37.110 "is_configured": true, 00:10:37.110 "data_offset": 0, 00:10:37.110 "data_size": 65536 00:10:37.110 }, 00:10:37.110 { 00:10:37.110 "name": null, 00:10:37.110 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:37.110 "is_configured": false, 00:10:37.110 "data_offset": 0, 00:10:37.110 "data_size": 65536 00:10:37.110 }, 00:10:37.110 { 00:10:37.110 "name": null, 00:10:37.110 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:37.110 "is_configured": false, 00:10:37.110 "data_offset": 0, 00:10:37.110 "data_size": 65536 00:10:37.110 }, 00:10:37.110 { 00:10:37.110 "name": "BaseBdev4", 00:10:37.110 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:37.110 "is_configured": true, 00:10:37.110 "data_offset": 0, 00:10:37.110 "data_size": 65536 00:10:37.110 } 00:10:37.110 ] 00:10:37.110 }' 00:10:37.110 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.110 03:19:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.370 [2024-11-21 03:19:24.863978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.370 03:19:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.370 "name": "Existed_Raid", 00:10:37.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.370 "strip_size_kb": 64, 00:10:37.370 "state": "configuring", 00:10:37.370 "raid_level": "raid0", 00:10:37.370 "superblock": false, 00:10:37.370 "num_base_bdevs": 4, 00:10:37.370 "num_base_bdevs_discovered": 3, 00:10:37.370 "num_base_bdevs_operational": 4, 00:10:37.370 "base_bdevs_list": [ 00:10:37.370 { 00:10:37.370 "name": "BaseBdev1", 00:10:37.370 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:37.370 "is_configured": true, 00:10:37.370 "data_offset": 0, 00:10:37.370 "data_size": 65536 00:10:37.370 }, 00:10:37.370 { 00:10:37.370 "name": null, 00:10:37.370 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:37.370 "is_configured": false, 00:10:37.370 "data_offset": 
0, 00:10:37.370 "data_size": 65536 00:10:37.370 }, 00:10:37.370 { 00:10:37.370 "name": "BaseBdev3", 00:10:37.370 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:37.370 "is_configured": true, 00:10:37.370 "data_offset": 0, 00:10:37.370 "data_size": 65536 00:10:37.370 }, 00:10:37.370 { 00:10:37.370 "name": "BaseBdev4", 00:10:37.370 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:37.370 "is_configured": true, 00:10:37.370 "data_offset": 0, 00:10:37.370 "data_size": 65536 00:10:37.370 } 00:10:37.370 ] 00:10:37.370 }' 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.370 03:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.938 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 [2024-11-21 03:19:25.356161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.939 03:19:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.939 "name": "Existed_Raid", 00:10:37.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.939 "strip_size_kb": 64, 00:10:37.939 "state": "configuring", 00:10:37.939 
"raid_level": "raid0", 00:10:37.939 "superblock": false, 00:10:37.939 "num_base_bdevs": 4, 00:10:37.939 "num_base_bdevs_discovered": 2, 00:10:37.939 "num_base_bdevs_operational": 4, 00:10:37.939 "base_bdevs_list": [ 00:10:37.939 { 00:10:37.939 "name": null, 00:10:37.939 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:37.939 "is_configured": false, 00:10:37.939 "data_offset": 0, 00:10:37.939 "data_size": 65536 00:10:37.939 }, 00:10:37.939 { 00:10:37.939 "name": null, 00:10:37.939 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:37.939 "is_configured": false, 00:10:37.939 "data_offset": 0, 00:10:37.939 "data_size": 65536 00:10:37.939 }, 00:10:37.939 { 00:10:37.939 "name": "BaseBdev3", 00:10:37.939 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:37.939 "is_configured": true, 00:10:37.939 "data_offset": 0, 00:10:37.939 "data_size": 65536 00:10:37.939 }, 00:10:37.939 { 00:10:37.939 "name": "BaseBdev4", 00:10:37.939 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:37.939 "is_configured": true, 00:10:37.939 "data_offset": 0, 00:10:37.939 "data_size": 65536 00:10:37.939 } 00:10:37.939 ] 00:10:37.939 }' 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.939 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.534 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.535 [2024-11-21 03:19:25.839195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.535 "name": "Existed_Raid", 00:10:38.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.535 "strip_size_kb": 64, 00:10:38.535 "state": "configuring", 00:10:38.535 "raid_level": "raid0", 00:10:38.535 "superblock": false, 00:10:38.535 "num_base_bdevs": 4, 00:10:38.535 "num_base_bdevs_discovered": 3, 00:10:38.535 "num_base_bdevs_operational": 4, 00:10:38.535 "base_bdevs_list": [ 00:10:38.535 { 00:10:38.535 "name": null, 00:10:38.535 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:38.535 "is_configured": false, 00:10:38.535 "data_offset": 0, 00:10:38.535 "data_size": 65536 00:10:38.535 }, 00:10:38.535 { 00:10:38.535 "name": "BaseBdev2", 00:10:38.535 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:38.535 "is_configured": true, 00:10:38.535 "data_offset": 0, 00:10:38.535 "data_size": 65536 00:10:38.535 }, 00:10:38.535 { 00:10:38.535 "name": "BaseBdev3", 00:10:38.535 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:38.535 "is_configured": true, 00:10:38.535 "data_offset": 0, 00:10:38.535 "data_size": 65536 00:10:38.535 }, 00:10:38.535 { 00:10:38.535 "name": "BaseBdev4", 00:10:38.535 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:38.535 "is_configured": true, 00:10:38.535 "data_offset": 0, 00:10:38.535 "data_size": 65536 00:10:38.535 } 00:10:38.535 ] 00:10:38.535 }' 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.535 03:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.794 03:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.794 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 730ae29a-82d0-4984-9fd9-42103667cab3 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.052 [2024-11-21 03:19:26.394549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.052 [2024-11-21 03:19:26.394696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.052 [2024-11-21 03:19:26.394731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:39.052 
[2024-11-21 03:19:26.395048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:39.052 [2024-11-21 03:19:26.395224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.052 [2024-11-21 03:19:26.395271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:39.052 [2024-11-21 03:19:26.395505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.052 NewBaseBdev 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:39.052 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.052 [ 00:10:39.052 { 00:10:39.052 "name": "NewBaseBdev", 00:10:39.052 "aliases": [ 00:10:39.052 "730ae29a-82d0-4984-9fd9-42103667cab3" 00:10:39.052 ], 00:10:39.052 "product_name": "Malloc disk", 00:10:39.052 "block_size": 512, 00:10:39.052 "num_blocks": 65536, 00:10:39.052 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:39.052 "assigned_rate_limits": { 00:10:39.052 "rw_ios_per_sec": 0, 00:10:39.052 "rw_mbytes_per_sec": 0, 00:10:39.052 "r_mbytes_per_sec": 0, 00:10:39.052 "w_mbytes_per_sec": 0 00:10:39.052 }, 00:10:39.052 "claimed": true, 00:10:39.053 "claim_type": "exclusive_write", 00:10:39.053 "zoned": false, 00:10:39.053 "supported_io_types": { 00:10:39.053 "read": true, 00:10:39.053 "write": true, 00:10:39.053 "unmap": true, 00:10:39.053 "flush": true, 00:10:39.053 "reset": true, 00:10:39.053 "nvme_admin": false, 00:10:39.053 "nvme_io": false, 00:10:39.053 "nvme_io_md": false, 00:10:39.053 "write_zeroes": true, 00:10:39.053 "zcopy": true, 00:10:39.053 "get_zone_info": false, 00:10:39.053 "zone_management": false, 00:10:39.053 "zone_append": false, 00:10:39.053 "compare": false, 00:10:39.053 "compare_and_write": false, 00:10:39.053 "abort": true, 00:10:39.053 "seek_hole": false, 00:10:39.053 "seek_data": false, 00:10:39.053 "copy": true, 00:10:39.053 "nvme_iov_md": false 00:10:39.053 }, 00:10:39.053 "memory_domains": [ 00:10:39.053 { 00:10:39.053 "dma_device_id": "system", 00:10:39.053 "dma_device_type": 1 00:10:39.053 }, 00:10:39.053 { 00:10:39.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.053 "dma_device_type": 2 00:10:39.053 } 00:10:39.053 ], 00:10:39.053 "driver_specific": {} 00:10:39.053 } 00:10:39.053 ] 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.053 "name": "Existed_Raid", 00:10:39.053 "uuid": "3d73cc14-d1c3-4c57-8d14-2fce06870c59", 00:10:39.053 "strip_size_kb": 64, 00:10:39.053 "state": "online", 
00:10:39.053 "raid_level": "raid0", 00:10:39.053 "superblock": false, 00:10:39.053 "num_base_bdevs": 4, 00:10:39.053 "num_base_bdevs_discovered": 4, 00:10:39.053 "num_base_bdevs_operational": 4, 00:10:39.053 "base_bdevs_list": [ 00:10:39.053 { 00:10:39.053 "name": "NewBaseBdev", 00:10:39.053 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:39.053 "is_configured": true, 00:10:39.053 "data_offset": 0, 00:10:39.053 "data_size": 65536 00:10:39.053 }, 00:10:39.053 { 00:10:39.053 "name": "BaseBdev2", 00:10:39.053 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:39.053 "is_configured": true, 00:10:39.053 "data_offset": 0, 00:10:39.053 "data_size": 65536 00:10:39.053 }, 00:10:39.053 { 00:10:39.053 "name": "BaseBdev3", 00:10:39.053 "uuid": "cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:39.053 "is_configured": true, 00:10:39.053 "data_offset": 0, 00:10:39.053 "data_size": 65536 00:10:39.053 }, 00:10:39.053 { 00:10:39.053 "name": "BaseBdev4", 00:10:39.053 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:39.053 "is_configured": true, 00:10:39.053 "data_offset": 0, 00:10:39.053 "data_size": 65536 00:10:39.053 } 00:10:39.053 ] 00:10:39.053 }' 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.053 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.312 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.312 [2024-11-21 03:19:26.871168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.572 "name": "Existed_Raid", 00:10:39.572 "aliases": [ 00:10:39.572 "3d73cc14-d1c3-4c57-8d14-2fce06870c59" 00:10:39.572 ], 00:10:39.572 "product_name": "Raid Volume", 00:10:39.572 "block_size": 512, 00:10:39.572 "num_blocks": 262144, 00:10:39.572 "uuid": "3d73cc14-d1c3-4c57-8d14-2fce06870c59", 00:10:39.572 "assigned_rate_limits": { 00:10:39.572 "rw_ios_per_sec": 0, 00:10:39.572 "rw_mbytes_per_sec": 0, 00:10:39.572 "r_mbytes_per_sec": 0, 00:10:39.572 "w_mbytes_per_sec": 0 00:10:39.572 }, 00:10:39.572 "claimed": false, 00:10:39.572 "zoned": false, 00:10:39.572 "supported_io_types": { 00:10:39.572 "read": true, 00:10:39.572 "write": true, 00:10:39.572 "unmap": true, 00:10:39.572 "flush": true, 00:10:39.572 "reset": true, 00:10:39.572 "nvme_admin": false, 00:10:39.572 "nvme_io": false, 00:10:39.572 "nvme_io_md": false, 00:10:39.572 "write_zeroes": true, 00:10:39.572 "zcopy": false, 00:10:39.572 "get_zone_info": false, 00:10:39.572 "zone_management": false, 00:10:39.572 "zone_append": false, 00:10:39.572 "compare": false, 00:10:39.572 "compare_and_write": false, 00:10:39.572 "abort": false, 00:10:39.572 "seek_hole": false, 00:10:39.572 "seek_data": 
false, 00:10:39.572 "copy": false, 00:10:39.572 "nvme_iov_md": false 00:10:39.572 }, 00:10:39.572 "memory_domains": [ 00:10:39.572 { 00:10:39.572 "dma_device_id": "system", 00:10:39.572 "dma_device_type": 1 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.572 "dma_device_type": 2 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "system", 00:10:39.572 "dma_device_type": 1 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.572 "dma_device_type": 2 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "system", 00:10:39.572 "dma_device_type": 1 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.572 "dma_device_type": 2 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "system", 00:10:39.572 "dma_device_type": 1 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.572 "dma_device_type": 2 00:10:39.572 } 00:10:39.572 ], 00:10:39.572 "driver_specific": { 00:10:39.572 "raid": { 00:10:39.572 "uuid": "3d73cc14-d1c3-4c57-8d14-2fce06870c59", 00:10:39.572 "strip_size_kb": 64, 00:10:39.572 "state": "online", 00:10:39.572 "raid_level": "raid0", 00:10:39.572 "superblock": false, 00:10:39.572 "num_base_bdevs": 4, 00:10:39.572 "num_base_bdevs_discovered": 4, 00:10:39.572 "num_base_bdevs_operational": 4, 00:10:39.572 "base_bdevs_list": [ 00:10:39.572 { 00:10:39.572 "name": "NewBaseBdev", 00:10:39.572 "uuid": "730ae29a-82d0-4984-9fd9-42103667cab3", 00:10:39.572 "is_configured": true, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 65536 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "name": "BaseBdev2", 00:10:39.572 "uuid": "8c9c7594-8fc6-4a3d-8345-1e8e6dba163f", 00:10:39.572 "is_configured": true, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 65536 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "name": "BaseBdev3", 00:10:39.572 "uuid": 
"cba89cb1-d54c-45aa-a949-6b7fd8fabc22", 00:10:39.572 "is_configured": true, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 65536 00:10:39.572 }, 00:10:39.572 { 00:10:39.572 "name": "BaseBdev4", 00:10:39.572 "uuid": "46dec9bc-9458-4c4e-b9ce-5add040684f6", 00:10:39.572 "is_configured": true, 00:10:39.572 "data_offset": 0, 00:10:39.572 "data_size": 65536 00:10:39.572 } 00:10:39.572 ] 00:10:39.572 } 00:10:39.572 } 00:10:39.572 }' 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.572 BaseBdev2 00:10:39.572 BaseBdev3 00:10:39.572 BaseBdev4' 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.572 03:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.572 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.573 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.831 [2024-11-21 03:19:27.186838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.831 [2024-11-21 03:19:27.186886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.831 [2024-11-21 03:19:27.186974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.831 [2024-11-21 03:19:27.187087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.831 [2024-11-21 03:19:27.187112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.831 03:19:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82394 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82394 ']' 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82394 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82394 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82394' 00:10:39.831 killing process with pid 82394 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 82394 00:10:39.831 [2024-11-21 03:19:27.237462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.831 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 82394 00:10:39.831 [2024-11-21 03:19:27.281300] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.091 00:10:40.091 real 0m9.666s 00:10:40.091 user 0m16.428s 00:10:40.091 sys 0m2.087s 00:10:40.091 ************************************ 00:10:40.091 END TEST raid_state_function_test 00:10:40.091 ************************************ 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.091 03:19:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:40.091 03:19:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:40.091 03:19:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.091 03:19:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.091 ************************************ 00:10:40.091 START TEST raid_state_function_test_sb 00:10:40.091 ************************************ 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:40.091 03:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83049 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83049' 00:10:40.091 Process raid pid: 83049 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83049 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83049 ']' 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.091 03:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.350 [2024-11-21 03:19:27.677565] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:40.350 [2024-11-21 03:19:27.677730] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.350 [2024-11-21 03:19:27.821503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:40.350 [2024-11-21 03:19:27.859258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.350 [2024-11-21 03:19:27.889913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.608 [2024-11-21 03:19:27.934052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.608 [2024-11-21 03:19:27.934088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.179 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.179 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:41.179 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.179 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.179 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.179 [2024-11-21 03:19:28.517297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.180 [2024-11-21 03:19:28.517370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.180 [2024-11-21 03:19:28.517396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.180 [2024-11-21 03:19:28.517407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.180 [2024-11-21 03:19:28.517418] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.180 [2024-11-21 03:19:28.517427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.180 [2024-11-21 03:19:28.517437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:41.180 [2024-11-21 03:19:28.517445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.180 "name": "Existed_Raid", 00:10:41.180 "uuid": "ae4383fd-371c-446e-932a-f3eadaf755cf", 00:10:41.180 "strip_size_kb": 64, 00:10:41.180 "state": "configuring", 00:10:41.180 "raid_level": "raid0", 00:10:41.180 "superblock": true, 00:10:41.180 "num_base_bdevs": 4, 00:10:41.180 "num_base_bdevs_discovered": 0, 00:10:41.180 "num_base_bdevs_operational": 4, 00:10:41.180 "base_bdevs_list": [ 00:10:41.180 { 00:10:41.180 "name": "BaseBdev1", 00:10:41.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.180 "is_configured": false, 00:10:41.180 "data_offset": 0, 00:10:41.180 "data_size": 0 00:10:41.180 }, 00:10:41.180 { 00:10:41.180 "name": "BaseBdev2", 00:10:41.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.180 "is_configured": false, 00:10:41.180 "data_offset": 0, 00:10:41.180 "data_size": 0 00:10:41.180 }, 00:10:41.180 { 00:10:41.180 "name": "BaseBdev3", 00:10:41.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.180 "is_configured": false, 00:10:41.180 "data_offset": 0, 00:10:41.180 "data_size": 0 00:10:41.180 }, 00:10:41.180 { 00:10:41.180 "name": "BaseBdev4", 00:10:41.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.180 "is_configured": false, 00:10:41.180 "data_offset": 0, 00:10:41.180 "data_size": 0 00:10:41.180 } 00:10:41.180 ] 00:10:41.180 }' 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.180 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:41.452 [2024-11-21 03:19:28.957306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.452 [2024-11-21 03:19:28.957450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.452 [2024-11-21 03:19:28.969365] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.452 [2024-11-21 03:19:28.969487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.452 [2024-11-21 03:19:28.969522] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.452 [2024-11-21 03:19:28.969549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.452 [2024-11-21 03:19:28.969573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.452 [2024-11-21 03:19:28.969596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.452 [2024-11-21 03:19:28.969619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.452 [2024-11-21 03:19:28.969651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.452 03:19:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.452 [2024-11-21 03:19:28.990481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.452 BaseBdev1 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.452 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.453 03:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.453 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.453 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.453 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:41.453 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.727 [ 00:10:41.727 { 00:10:41.727 "name": "BaseBdev1", 00:10:41.727 "aliases": [ 00:10:41.727 "09028598-7b23-4301-b328-c147658e5140" 00:10:41.727 ], 00:10:41.727 "product_name": "Malloc disk", 00:10:41.727 "block_size": 512, 00:10:41.727 "num_blocks": 65536, 00:10:41.727 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:41.727 "assigned_rate_limits": { 00:10:41.727 "rw_ios_per_sec": 0, 00:10:41.728 "rw_mbytes_per_sec": 0, 00:10:41.728 "r_mbytes_per_sec": 0, 00:10:41.728 "w_mbytes_per_sec": 0 00:10:41.728 }, 00:10:41.728 "claimed": true, 00:10:41.728 "claim_type": "exclusive_write", 00:10:41.728 "zoned": false, 00:10:41.728 "supported_io_types": { 00:10:41.728 "read": true, 00:10:41.728 "write": true, 00:10:41.728 "unmap": true, 00:10:41.728 "flush": true, 00:10:41.728 "reset": true, 00:10:41.728 "nvme_admin": false, 00:10:41.728 "nvme_io": false, 00:10:41.728 "nvme_io_md": false, 00:10:41.728 "write_zeroes": true, 00:10:41.728 "zcopy": true, 00:10:41.728 "get_zone_info": false, 00:10:41.728 "zone_management": false, 00:10:41.728 "zone_append": false, 00:10:41.728 "compare": false, 00:10:41.728 "compare_and_write": false, 00:10:41.728 "abort": true, 00:10:41.728 "seek_hole": false, 00:10:41.728 "seek_data": false, 00:10:41.728 "copy": true, 00:10:41.728 "nvme_iov_md": false 00:10:41.728 }, 00:10:41.728 "memory_domains": [ 00:10:41.728 { 00:10:41.728 "dma_device_id": "system", 00:10:41.728 "dma_device_type": 1 00:10:41.728 }, 00:10:41.728 { 00:10:41.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.728 "dma_device_type": 2 00:10:41.728 } 00:10:41.728 ], 00:10:41.728 "driver_specific": {} 00:10:41.728 } 00:10:41.728 ] 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.728 
03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.728 "name": "Existed_Raid", 00:10:41.728 "uuid": "a45f3d6f-ff24-4f47-b3b0-44722c48997a", 00:10:41.728 "strip_size_kb": 
64, 00:10:41.728 "state": "configuring", 00:10:41.728 "raid_level": "raid0", 00:10:41.728 "superblock": true, 00:10:41.728 "num_base_bdevs": 4, 00:10:41.728 "num_base_bdevs_discovered": 1, 00:10:41.728 "num_base_bdevs_operational": 4, 00:10:41.728 "base_bdevs_list": [ 00:10:41.728 { 00:10:41.728 "name": "BaseBdev1", 00:10:41.728 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:41.728 "is_configured": true, 00:10:41.728 "data_offset": 2048, 00:10:41.728 "data_size": 63488 00:10:41.728 }, 00:10:41.728 { 00:10:41.728 "name": "BaseBdev2", 00:10:41.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.728 "is_configured": false, 00:10:41.728 "data_offset": 0, 00:10:41.728 "data_size": 0 00:10:41.728 }, 00:10:41.728 { 00:10:41.728 "name": "BaseBdev3", 00:10:41.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.728 "is_configured": false, 00:10:41.728 "data_offset": 0, 00:10:41.728 "data_size": 0 00:10:41.728 }, 00:10:41.728 { 00:10:41.728 "name": "BaseBdev4", 00:10:41.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.728 "is_configured": false, 00:10:41.728 "data_offset": 0, 00:10:41.728 "data_size": 0 00:10:41.728 } 00:10:41.728 ] 00:10:41.728 }' 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.728 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.987 [2024-11-21 03:19:29.486727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.987 [2024-11-21 03:19:29.486812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
Existed_Raid, state configuring 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.987 [2024-11-21 03:19:29.498771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.987 [2024-11-21 03:19:29.500992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.987 [2024-11-21 03:19:29.501058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.987 [2024-11-21 03:19:29.501071] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.987 [2024-11-21 03:19:29.501081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.987 [2024-11-21 03:19:29.501090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.987 [2024-11-21 03:19:29.501098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.987 03:19:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.987 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.245 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.245 "name": "Existed_Raid", 00:10:42.245 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:42.245 "strip_size_kb": 64, 00:10:42.245 "state": "configuring", 00:10:42.245 "raid_level": "raid0", 00:10:42.245 "superblock": true, 00:10:42.245 "num_base_bdevs": 4, 00:10:42.245 
"num_base_bdevs_discovered": 1, 00:10:42.245 "num_base_bdevs_operational": 4, 00:10:42.245 "base_bdevs_list": [ 00:10:42.245 { 00:10:42.245 "name": "BaseBdev1", 00:10:42.245 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:42.245 "is_configured": true, 00:10:42.245 "data_offset": 2048, 00:10:42.245 "data_size": 63488 00:10:42.245 }, 00:10:42.245 { 00:10:42.245 "name": "BaseBdev2", 00:10:42.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.245 "is_configured": false, 00:10:42.245 "data_offset": 0, 00:10:42.245 "data_size": 0 00:10:42.245 }, 00:10:42.245 { 00:10:42.245 "name": "BaseBdev3", 00:10:42.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.245 "is_configured": false, 00:10:42.245 "data_offset": 0, 00:10:42.245 "data_size": 0 00:10:42.245 }, 00:10:42.245 { 00:10:42.245 "name": "BaseBdev4", 00:10:42.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.245 "is_configured": false, 00:10:42.245 "data_offset": 0, 00:10:42.245 "data_size": 0 00:10:42.245 } 00:10:42.245 ] 00:10:42.245 }' 00:10:42.245 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.245 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.504 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.504 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.504 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.505 [2024-11-21 03:19:29.962082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.505 BaseBdev2 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:42.505 03:19:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.505 [ 00:10:42.505 { 00:10:42.505 "name": "BaseBdev2", 00:10:42.505 "aliases": [ 00:10:42.505 "3a009967-a3f0-47ee-9d2a-1878499143ba" 00:10:42.505 ], 00:10:42.505 "product_name": "Malloc disk", 00:10:42.505 "block_size": 512, 00:10:42.505 "num_blocks": 65536, 00:10:42.505 "uuid": "3a009967-a3f0-47ee-9d2a-1878499143ba", 00:10:42.505 "assigned_rate_limits": { 00:10:42.505 "rw_ios_per_sec": 0, 00:10:42.505 "rw_mbytes_per_sec": 0, 00:10:42.505 "r_mbytes_per_sec": 0, 00:10:42.505 "w_mbytes_per_sec": 0 00:10:42.505 }, 00:10:42.505 "claimed": true, 00:10:42.505 "claim_type": "exclusive_write", 00:10:42.505 "zoned": false, 
00:10:42.505 "supported_io_types": { 00:10:42.505 "read": true, 00:10:42.505 "write": true, 00:10:42.505 "unmap": true, 00:10:42.505 "flush": true, 00:10:42.505 "reset": true, 00:10:42.505 "nvme_admin": false, 00:10:42.505 "nvme_io": false, 00:10:42.505 "nvme_io_md": false, 00:10:42.505 "write_zeroes": true, 00:10:42.505 "zcopy": true, 00:10:42.505 "get_zone_info": false, 00:10:42.505 "zone_management": false, 00:10:42.505 "zone_append": false, 00:10:42.505 "compare": false, 00:10:42.505 "compare_and_write": false, 00:10:42.505 "abort": true, 00:10:42.505 "seek_hole": false, 00:10:42.505 "seek_data": false, 00:10:42.505 "copy": true, 00:10:42.505 "nvme_iov_md": false 00:10:42.505 }, 00:10:42.505 "memory_domains": [ 00:10:42.505 { 00:10:42.505 "dma_device_id": "system", 00:10:42.505 "dma_device_type": 1 00:10:42.505 }, 00:10:42.505 { 00:10:42.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.505 "dma_device_type": 2 00:10:42.505 } 00:10:42.505 ], 00:10:42.505 "driver_specific": {} 00:10:42.505 } 00:10:42.505 ] 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.505 03:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.505 03:19:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.505 "name": "Existed_Raid", 00:10:42.505 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:42.505 "strip_size_kb": 64, 00:10:42.505 "state": "configuring", 00:10:42.505 "raid_level": "raid0", 00:10:42.505 "superblock": true, 00:10:42.505 "num_base_bdevs": 4, 00:10:42.505 "num_base_bdevs_discovered": 2, 00:10:42.505 "num_base_bdevs_operational": 4, 00:10:42.505 "base_bdevs_list": [ 00:10:42.505 { 00:10:42.505 "name": "BaseBdev1", 00:10:42.505 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:42.505 "is_configured": true, 00:10:42.505 "data_offset": 2048, 00:10:42.505 "data_size": 63488 00:10:42.505 }, 00:10:42.505 { 
00:10:42.505 "name": "BaseBdev2", 00:10:42.505 "uuid": "3a009967-a3f0-47ee-9d2a-1878499143ba", 00:10:42.505 "is_configured": true, 00:10:42.505 "data_offset": 2048, 00:10:42.505 "data_size": 63488 00:10:42.505 }, 00:10:42.505 { 00:10:42.505 "name": "BaseBdev3", 00:10:42.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.505 "is_configured": false, 00:10:42.505 "data_offset": 0, 00:10:42.505 "data_size": 0 00:10:42.505 }, 00:10:42.505 { 00:10:42.505 "name": "BaseBdev4", 00:10:42.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.505 "is_configured": false, 00:10:42.505 "data_offset": 0, 00:10:42.505 "data_size": 0 00:10:42.505 } 00:10:42.505 ] 00:10:42.505 }' 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.505 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.074 [2024-11-21 03:19:30.475650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.074 BaseBdev3 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.074 03:19:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.074 [ 00:10:43.074 { 00:10:43.074 "name": "BaseBdev3", 00:10:43.074 "aliases": [ 00:10:43.074 "f683c23e-6bff-47ce-a50b-6b5fbaaf8e13" 00:10:43.074 ], 00:10:43.074 "product_name": "Malloc disk", 00:10:43.074 "block_size": 512, 00:10:43.074 "num_blocks": 65536, 00:10:43.074 "uuid": "f683c23e-6bff-47ce-a50b-6b5fbaaf8e13", 00:10:43.074 "assigned_rate_limits": { 00:10:43.074 "rw_ios_per_sec": 0, 00:10:43.074 "rw_mbytes_per_sec": 0, 00:10:43.074 "r_mbytes_per_sec": 0, 00:10:43.074 "w_mbytes_per_sec": 0 00:10:43.074 }, 00:10:43.074 "claimed": true, 00:10:43.074 "claim_type": "exclusive_write", 00:10:43.074 "zoned": false, 00:10:43.074 "supported_io_types": { 00:10:43.074 "read": true, 00:10:43.074 "write": true, 00:10:43.074 "unmap": true, 00:10:43.074 "flush": true, 00:10:43.074 "reset": true, 00:10:43.074 "nvme_admin": false, 00:10:43.074 "nvme_io": false, 00:10:43.074 "nvme_io_md": false, 00:10:43.074 "write_zeroes": true, 00:10:43.074 "zcopy": true, 
00:10:43.074 "get_zone_info": false, 00:10:43.074 "zone_management": false, 00:10:43.074 "zone_append": false, 00:10:43.074 "compare": false, 00:10:43.074 "compare_and_write": false, 00:10:43.074 "abort": true, 00:10:43.074 "seek_hole": false, 00:10:43.074 "seek_data": false, 00:10:43.074 "copy": true, 00:10:43.074 "nvme_iov_md": false 00:10:43.074 }, 00:10:43.074 "memory_domains": [ 00:10:43.074 { 00:10:43.074 "dma_device_id": "system", 00:10:43.074 "dma_device_type": 1 00:10:43.074 }, 00:10:43.074 { 00:10:43.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.074 "dma_device_type": 2 00:10:43.074 } 00:10:43.074 ], 00:10:43.074 "driver_specific": {} 00:10:43.074 } 00:10:43.074 ] 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.074 
03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.074 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.074 "name": "Existed_Raid", 00:10:43.075 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:43.075 "strip_size_kb": 64, 00:10:43.075 "state": "configuring", 00:10:43.075 "raid_level": "raid0", 00:10:43.075 "superblock": true, 00:10:43.075 "num_base_bdevs": 4, 00:10:43.075 "num_base_bdevs_discovered": 3, 00:10:43.075 "num_base_bdevs_operational": 4, 00:10:43.075 "base_bdevs_list": [ 00:10:43.075 { 00:10:43.075 "name": "BaseBdev1", 00:10:43.075 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:43.075 "is_configured": true, 00:10:43.075 "data_offset": 2048, 00:10:43.075 "data_size": 63488 00:10:43.075 }, 00:10:43.075 { 00:10:43.075 "name": "BaseBdev2", 00:10:43.075 "uuid": "3a009967-a3f0-47ee-9d2a-1878499143ba", 00:10:43.075 "is_configured": true, 00:10:43.075 "data_offset": 2048, 00:10:43.075 "data_size": 63488 00:10:43.075 }, 00:10:43.075 { 00:10:43.075 "name": "BaseBdev3", 00:10:43.075 "uuid": "f683c23e-6bff-47ce-a50b-6b5fbaaf8e13", 00:10:43.075 
"is_configured": true, 00:10:43.075 "data_offset": 2048, 00:10:43.075 "data_size": 63488 00:10:43.075 }, 00:10:43.075 { 00:10:43.075 "name": "BaseBdev4", 00:10:43.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.075 "is_configured": false, 00:10:43.075 "data_offset": 0, 00:10:43.075 "data_size": 0 00:10:43.075 } 00:10:43.075 ] 00:10:43.075 }' 00:10:43.075 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.075 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 [2024-11-21 03:19:30.991170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.644 [2024-11-21 03:19:30.991396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:43.644 [2024-11-21 03:19:30.991419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.644 BaseBdev4 00:10:43.644 [2024-11-21 03:19:30.991721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:43.644 [2024-11-21 03:19:30.991877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:43.644 [2024-11-21 03:19:30.991900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:43.644 [2024-11-21 03:19:30.992063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.644 03:19:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.644 03:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 [ 00:10:43.644 { 00:10:43.644 "name": "BaseBdev4", 00:10:43.644 "aliases": [ 00:10:43.644 "e3ba1c47-5d18-4621-bcae-375ac8efbd6a" 00:10:43.644 ], 00:10:43.644 "product_name": "Malloc disk", 00:10:43.644 "block_size": 512, 00:10:43.644 "num_blocks": 65536, 00:10:43.644 "uuid": "e3ba1c47-5d18-4621-bcae-375ac8efbd6a", 00:10:43.644 "assigned_rate_limits": { 00:10:43.644 "rw_ios_per_sec": 0, 00:10:43.644 "rw_mbytes_per_sec": 0, 00:10:43.644 "r_mbytes_per_sec": 0, 00:10:43.644 "w_mbytes_per_sec": 0 
00:10:43.644 }, 00:10:43.644 "claimed": true, 00:10:43.644 "claim_type": "exclusive_write", 00:10:43.644 "zoned": false, 00:10:43.644 "supported_io_types": { 00:10:43.644 "read": true, 00:10:43.644 "write": true, 00:10:43.644 "unmap": true, 00:10:43.644 "flush": true, 00:10:43.644 "reset": true, 00:10:43.644 "nvme_admin": false, 00:10:43.644 "nvme_io": false, 00:10:43.644 "nvme_io_md": false, 00:10:43.644 "write_zeroes": true, 00:10:43.644 "zcopy": true, 00:10:43.644 "get_zone_info": false, 00:10:43.644 "zone_management": false, 00:10:43.644 "zone_append": false, 00:10:43.644 "compare": false, 00:10:43.644 "compare_and_write": false, 00:10:43.644 "abort": true, 00:10:43.644 "seek_hole": false, 00:10:43.644 "seek_data": false, 00:10:43.644 "copy": true, 00:10:43.644 "nvme_iov_md": false 00:10:43.644 }, 00:10:43.644 "memory_domains": [ 00:10:43.644 { 00:10:43.644 "dma_device_id": "system", 00:10:43.644 "dma_device_type": 1 00:10:43.644 }, 00:10:43.644 { 00:10:43.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.644 "dma_device_type": 2 00:10:43.644 } 00:10:43.644 ], 00:10:43.644 "driver_specific": {} 00:10:43.644 } 00:10:43.644 ] 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.644 03:19:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.644 "name": "Existed_Raid", 00:10:43.644 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:43.644 "strip_size_kb": 64, 00:10:43.644 "state": "online", 00:10:43.644 "raid_level": "raid0", 00:10:43.644 "superblock": true, 00:10:43.644 "num_base_bdevs": 4, 00:10:43.644 "num_base_bdevs_discovered": 4, 00:10:43.644 "num_base_bdevs_operational": 4, 00:10:43.644 "base_bdevs_list": [ 00:10:43.644 { 00:10:43.644 "name": "BaseBdev1", 00:10:43.644 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:43.644 "is_configured": 
true, 00:10:43.644 "data_offset": 2048, 00:10:43.644 "data_size": 63488 00:10:43.644 }, 00:10:43.644 { 00:10:43.644 "name": "BaseBdev2", 00:10:43.644 "uuid": "3a009967-a3f0-47ee-9d2a-1878499143ba", 00:10:43.644 "is_configured": true, 00:10:43.644 "data_offset": 2048, 00:10:43.644 "data_size": 63488 00:10:43.644 }, 00:10:43.644 { 00:10:43.644 "name": "BaseBdev3", 00:10:43.644 "uuid": "f683c23e-6bff-47ce-a50b-6b5fbaaf8e13", 00:10:43.644 "is_configured": true, 00:10:43.644 "data_offset": 2048, 00:10:43.644 "data_size": 63488 00:10:43.644 }, 00:10:43.644 { 00:10:43.644 "name": "BaseBdev4", 00:10:43.644 "uuid": "e3ba1c47-5d18-4621-bcae-375ac8efbd6a", 00:10:43.644 "is_configured": true, 00:10:43.644 "data_offset": 2048, 00:10:43.644 "data_size": 63488 00:10:43.644 } 00:10:43.644 ] 00:10:43.644 }' 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.644 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.213 03:19:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.213 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.213 [2024-11-21 03:19:31.515770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.214 "name": "Existed_Raid", 00:10:44.214 "aliases": [ 00:10:44.214 "398dac86-f470-4ce6-8a28-2224ed9c5fda" 00:10:44.214 ], 00:10:44.214 "product_name": "Raid Volume", 00:10:44.214 "block_size": 512, 00:10:44.214 "num_blocks": 253952, 00:10:44.214 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:44.214 "assigned_rate_limits": { 00:10:44.214 "rw_ios_per_sec": 0, 00:10:44.214 "rw_mbytes_per_sec": 0, 00:10:44.214 "r_mbytes_per_sec": 0, 00:10:44.214 "w_mbytes_per_sec": 0 00:10:44.214 }, 00:10:44.214 "claimed": false, 00:10:44.214 "zoned": false, 00:10:44.214 "supported_io_types": { 00:10:44.214 "read": true, 00:10:44.214 "write": true, 00:10:44.214 "unmap": true, 00:10:44.214 "flush": true, 00:10:44.214 "reset": true, 00:10:44.214 "nvme_admin": false, 00:10:44.214 "nvme_io": false, 00:10:44.214 "nvme_io_md": false, 00:10:44.214 "write_zeroes": true, 00:10:44.214 "zcopy": false, 00:10:44.214 "get_zone_info": false, 00:10:44.214 "zone_management": false, 00:10:44.214 "zone_append": false, 00:10:44.214 "compare": false, 00:10:44.214 "compare_and_write": false, 00:10:44.214 "abort": false, 00:10:44.214 "seek_hole": false, 00:10:44.214 "seek_data": false, 00:10:44.214 "copy": false, 00:10:44.214 "nvme_iov_md": false 00:10:44.214 }, 00:10:44.214 "memory_domains": [ 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.214 
"dma_device_type": 2 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.214 "dma_device_type": 2 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.214 "dma_device_type": 2 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "system", 00:10:44.214 "dma_device_type": 1 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.214 "dma_device_type": 2 00:10:44.214 } 00:10:44.214 ], 00:10:44.214 "driver_specific": { 00:10:44.214 "raid": { 00:10:44.214 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:44.214 "strip_size_kb": 64, 00:10:44.214 "state": "online", 00:10:44.214 "raid_level": "raid0", 00:10:44.214 "superblock": true, 00:10:44.214 "num_base_bdevs": 4, 00:10:44.214 "num_base_bdevs_discovered": 4, 00:10:44.214 "num_base_bdevs_operational": 4, 00:10:44.214 "base_bdevs_list": [ 00:10:44.214 { 00:10:44.214 "name": "BaseBdev1", 00:10:44.214 "uuid": "09028598-7b23-4301-b328-c147658e5140", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "name": "BaseBdev2", 00:10:44.214 "uuid": "3a009967-a3f0-47ee-9d2a-1878499143ba", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "name": "BaseBdev3", 00:10:44.214 "uuid": "f683c23e-6bff-47ce-a50b-6b5fbaaf8e13", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 2048, 00:10:44.214 "data_size": 63488 00:10:44.214 }, 00:10:44.214 { 00:10:44.214 "name": "BaseBdev4", 00:10:44.214 "uuid": "e3ba1c47-5d18-4621-bcae-375ac8efbd6a", 00:10:44.214 "is_configured": true, 00:10:44.214 "data_offset": 
2048, 00:10:44.214 "data_size": 63488 00:10:44.214 } 00:10:44.214 ] 00:10:44.214 } 00:10:44.214 } 00:10:44.214 }' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:44.214 BaseBdev2 00:10:44.214 BaseBdev3 00:10:44.214 BaseBdev4' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.214 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.215 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.474 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 [2024-11-21 03:19:31.827577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.475 [2024-11-21 03:19:31.827709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.475 [2024-11-21 03:19:31.827810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:44.475 03:19:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.475 "name": "Existed_Raid", 00:10:44.475 "uuid": "398dac86-f470-4ce6-8a28-2224ed9c5fda", 00:10:44.475 "strip_size_kb": 64, 00:10:44.475 
"state": "offline", 00:10:44.475 "raid_level": "raid0", 00:10:44.475 "superblock": true, 00:10:44.475 "num_base_bdevs": 4, 00:10:44.475 "num_base_bdevs_discovered": 3, 00:10:44.475 "num_base_bdevs_operational": 3, 00:10:44.475 "base_bdevs_list": [ 00:10:44.475 { 00:10:44.475 "name": null, 00:10:44.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.475 "is_configured": false, 00:10:44.475 "data_offset": 0, 00:10:44.475 "data_size": 63488 00:10:44.475 }, 00:10:44.475 { 00:10:44.475 "name": "BaseBdev2", 00:10:44.475 "uuid": "3a009967-a3f0-47ee-9d2a-1878499143ba", 00:10:44.475 "is_configured": true, 00:10:44.475 "data_offset": 2048, 00:10:44.475 "data_size": 63488 00:10:44.475 }, 00:10:44.475 { 00:10:44.475 "name": "BaseBdev3", 00:10:44.475 "uuid": "f683c23e-6bff-47ce-a50b-6b5fbaaf8e13", 00:10:44.475 "is_configured": true, 00:10:44.475 "data_offset": 2048, 00:10:44.475 "data_size": 63488 00:10:44.475 }, 00:10:44.475 { 00:10:44.475 "name": "BaseBdev4", 00:10:44.475 "uuid": "e3ba1c47-5d18-4621-bcae-375ac8efbd6a", 00:10:44.475 "is_configured": true, 00:10:44.475 "data_offset": 2048, 00:10:44.475 "data_size": 63488 00:10:44.475 } 00:10:44.475 ] 00:10:44.475 }' 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.475 03:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 03:19:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 [2024-11-21 03:19:32.363548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 [2024-11-21 03:19:32.435344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 [2024-11-21 03:19:32.503119] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:45.045 [2024-11-21 03:19:32.503294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:45.045 BaseBdev2 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.045 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.046 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.306 [ 00:10:45.306 { 00:10:45.306 "name": "BaseBdev2", 00:10:45.306 "aliases": [ 00:10:45.306 "7fc1546a-ae38-4069-b846-7588056ff4bb" 00:10:45.306 ], 00:10:45.306 "product_name": "Malloc disk", 00:10:45.306 "block_size": 512, 00:10:45.306 "num_blocks": 65536, 00:10:45.306 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:45.306 
"assigned_rate_limits": { 00:10:45.306 "rw_ios_per_sec": 0, 00:10:45.306 "rw_mbytes_per_sec": 0, 00:10:45.306 "r_mbytes_per_sec": 0, 00:10:45.306 "w_mbytes_per_sec": 0 00:10:45.306 }, 00:10:45.306 "claimed": false, 00:10:45.306 "zoned": false, 00:10:45.306 "supported_io_types": { 00:10:45.306 "read": true, 00:10:45.306 "write": true, 00:10:45.306 "unmap": true, 00:10:45.306 "flush": true, 00:10:45.306 "reset": true, 00:10:45.306 "nvme_admin": false, 00:10:45.306 "nvme_io": false, 00:10:45.306 "nvme_io_md": false, 00:10:45.306 "write_zeroes": true, 00:10:45.306 "zcopy": true, 00:10:45.306 "get_zone_info": false, 00:10:45.306 "zone_management": false, 00:10:45.306 "zone_append": false, 00:10:45.306 "compare": false, 00:10:45.306 "compare_and_write": false, 00:10:45.306 "abort": true, 00:10:45.306 "seek_hole": false, 00:10:45.306 "seek_data": false, 00:10:45.306 "copy": true, 00:10:45.306 "nvme_iov_md": false 00:10:45.306 }, 00:10:45.306 "memory_domains": [ 00:10:45.306 { 00:10:45.306 "dma_device_id": "system", 00:10:45.306 "dma_device_type": 1 00:10:45.306 }, 00:10:45.306 { 00:10:45.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.306 "dma_device_type": 2 00:10:45.307 } 00:10:45.307 ], 00:10:45.307 "driver_specific": {} 00:10:45.307 } 00:10:45.307 ] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.307 03:19:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 BaseBdev3 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 [ 00:10:45.307 { 00:10:45.307 "name": "BaseBdev3", 00:10:45.307 "aliases": [ 00:10:45.307 "6222446a-724d-4040-876f-1ef730a42a56" 00:10:45.307 ], 00:10:45.307 "product_name": "Malloc disk", 00:10:45.307 "block_size": 512, 00:10:45.307 "num_blocks": 65536, 00:10:45.307 
"uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:45.307 "assigned_rate_limits": { 00:10:45.307 "rw_ios_per_sec": 0, 00:10:45.307 "rw_mbytes_per_sec": 0, 00:10:45.307 "r_mbytes_per_sec": 0, 00:10:45.307 "w_mbytes_per_sec": 0 00:10:45.307 }, 00:10:45.307 "claimed": false, 00:10:45.307 "zoned": false, 00:10:45.307 "supported_io_types": { 00:10:45.307 "read": true, 00:10:45.307 "write": true, 00:10:45.307 "unmap": true, 00:10:45.307 "flush": true, 00:10:45.307 "reset": true, 00:10:45.307 "nvme_admin": false, 00:10:45.307 "nvme_io": false, 00:10:45.307 "nvme_io_md": false, 00:10:45.307 "write_zeroes": true, 00:10:45.307 "zcopy": true, 00:10:45.307 "get_zone_info": false, 00:10:45.307 "zone_management": false, 00:10:45.307 "zone_append": false, 00:10:45.307 "compare": false, 00:10:45.307 "compare_and_write": false, 00:10:45.307 "abort": true, 00:10:45.307 "seek_hole": false, 00:10:45.307 "seek_data": false, 00:10:45.307 "copy": true, 00:10:45.307 "nvme_iov_md": false 00:10:45.307 }, 00:10:45.307 "memory_domains": [ 00:10:45.307 { 00:10:45.307 "dma_device_id": "system", 00:10:45.307 "dma_device_type": 1 00:10:45.307 }, 00:10:45.307 { 00:10:45.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.307 "dma_device_type": 2 00:10:45.307 } 00:10:45.307 ], 00:10:45.307 "driver_specific": {} 00:10:45.307 } 00:10:45.307 ] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 BaseBdev4 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 [ 00:10:45.307 { 00:10:45.307 "name": "BaseBdev4", 00:10:45.307 "aliases": [ 00:10:45.307 "7a7da2be-e118-449d-8088-8ef73a5e9a52" 00:10:45.307 ], 00:10:45.307 "product_name": "Malloc disk", 00:10:45.307 "block_size": 512, 
00:10:45.307 "num_blocks": 65536, 00:10:45.307 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:45.307 "assigned_rate_limits": { 00:10:45.307 "rw_ios_per_sec": 0, 00:10:45.307 "rw_mbytes_per_sec": 0, 00:10:45.307 "r_mbytes_per_sec": 0, 00:10:45.307 "w_mbytes_per_sec": 0 00:10:45.307 }, 00:10:45.307 "claimed": false, 00:10:45.307 "zoned": false, 00:10:45.307 "supported_io_types": { 00:10:45.307 "read": true, 00:10:45.307 "write": true, 00:10:45.307 "unmap": true, 00:10:45.307 "flush": true, 00:10:45.307 "reset": true, 00:10:45.307 "nvme_admin": false, 00:10:45.307 "nvme_io": false, 00:10:45.307 "nvme_io_md": false, 00:10:45.307 "write_zeroes": true, 00:10:45.307 "zcopy": true, 00:10:45.307 "get_zone_info": false, 00:10:45.307 "zone_management": false, 00:10:45.307 "zone_append": false, 00:10:45.307 "compare": false, 00:10:45.307 "compare_and_write": false, 00:10:45.307 "abort": true, 00:10:45.307 "seek_hole": false, 00:10:45.307 "seek_data": false, 00:10:45.307 "copy": true, 00:10:45.307 "nvme_iov_md": false 00:10:45.307 }, 00:10:45.307 "memory_domains": [ 00:10:45.307 { 00:10:45.307 "dma_device_id": "system", 00:10:45.307 "dma_device_type": 1 00:10:45.307 }, 00:10:45.307 { 00:10:45.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.307 "dma_device_type": 2 00:10:45.307 } 00:10:45.307 ], 00:10:45.307 "driver_specific": {} 00:10:45.307 } 00:10:45.307 ] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4'\''' -n Existed_Raid 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.307 [2024-11-21 03:19:32.739035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.307 [2024-11-21 03:19:32.739186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.307 [2024-11-21 03:19:32.739236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.307 [2024-11-21 03:19:32.741402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.307 [2024-11-21 03:19:32.741512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.307 03:19:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.307 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.308 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.308 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.308 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.308 "name": "Existed_Raid", 00:10:45.308 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:45.308 "strip_size_kb": 64, 00:10:45.308 "state": "configuring", 00:10:45.308 "raid_level": "raid0", 00:10:45.308 "superblock": true, 00:10:45.308 "num_base_bdevs": 4, 00:10:45.308 "num_base_bdevs_discovered": 3, 00:10:45.308 "num_base_bdevs_operational": 4, 00:10:45.308 "base_bdevs_list": [ 00:10:45.308 { 00:10:45.308 "name": "BaseBdev1", 00:10:45.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.308 "is_configured": false, 00:10:45.308 "data_offset": 0, 00:10:45.308 "data_size": 0 00:10:45.308 }, 00:10:45.308 { 00:10:45.308 "name": "BaseBdev2", 00:10:45.308 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:45.308 "is_configured": true, 00:10:45.308 "data_offset": 2048, 00:10:45.308 "data_size": 63488 00:10:45.308 }, 00:10:45.308 { 00:10:45.308 "name": "BaseBdev3", 00:10:45.308 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:45.308 "is_configured": true, 00:10:45.308 "data_offset": 2048, 00:10:45.308 "data_size": 63488 00:10:45.308 }, 00:10:45.308 { 00:10:45.308 
"name": "BaseBdev4", 00:10:45.308 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:45.308 "is_configured": true, 00:10:45.308 "data_offset": 2048, 00:10:45.308 "data_size": 63488 00:10:45.308 } 00:10:45.308 ] 00:10:45.308 }' 00:10:45.308 03:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.308 03:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.877 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:45.877 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.878 [2024-11-21 03:19:33.211149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.878 03:19:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.878 "name": "Existed_Raid", 00:10:45.878 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:45.878 "strip_size_kb": 64, 00:10:45.878 "state": "configuring", 00:10:45.878 "raid_level": "raid0", 00:10:45.878 "superblock": true, 00:10:45.878 "num_base_bdevs": 4, 00:10:45.878 "num_base_bdevs_discovered": 2, 00:10:45.878 "num_base_bdevs_operational": 4, 00:10:45.878 "base_bdevs_list": [ 00:10:45.878 { 00:10:45.878 "name": "BaseBdev1", 00:10:45.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.878 "is_configured": false, 00:10:45.878 "data_offset": 0, 00:10:45.878 "data_size": 0 00:10:45.878 }, 00:10:45.878 { 00:10:45.878 "name": null, 00:10:45.878 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:45.878 "is_configured": false, 00:10:45.878 "data_offset": 0, 00:10:45.878 "data_size": 63488 00:10:45.878 }, 00:10:45.878 { 00:10:45.878 "name": "BaseBdev3", 00:10:45.878 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:45.878 "is_configured": true, 00:10:45.878 "data_offset": 2048, 00:10:45.878 "data_size": 63488 00:10:45.878 }, 00:10:45.878 { 00:10:45.878 "name": 
"BaseBdev4", 00:10:45.878 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:45.878 "is_configured": true, 00:10:45.878 "data_offset": 2048, 00:10:45.878 "data_size": 63488 00:10:45.878 } 00:10:45.878 ] 00:10:45.878 }' 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.878 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.137 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.137 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.137 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.137 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.137 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.397 [2024-11-21 03:19:33.746465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.397 BaseBdev1 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.397 
03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.397 [ 00:10:46.397 { 00:10:46.397 "name": "BaseBdev1", 00:10:46.397 "aliases": [ 00:10:46.397 "7959248d-0a90-45a8-8751-e304afeed873" 00:10:46.397 ], 00:10:46.397 "product_name": "Malloc disk", 00:10:46.397 "block_size": 512, 00:10:46.397 "num_blocks": 65536, 00:10:46.397 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:46.397 "assigned_rate_limits": { 00:10:46.397 "rw_ios_per_sec": 0, 00:10:46.397 "rw_mbytes_per_sec": 0, 00:10:46.397 "r_mbytes_per_sec": 0, 00:10:46.397 "w_mbytes_per_sec": 0 00:10:46.397 }, 00:10:46.397 "claimed": true, 00:10:46.397 "claim_type": "exclusive_write", 00:10:46.397 "zoned": false, 00:10:46.397 "supported_io_types": { 00:10:46.397 "read": true, 00:10:46.397 "write": true, 00:10:46.397 "unmap": 
true, 00:10:46.397 "flush": true, 00:10:46.397 "reset": true, 00:10:46.397 "nvme_admin": false, 00:10:46.397 "nvme_io": false, 00:10:46.397 "nvme_io_md": false, 00:10:46.397 "write_zeroes": true, 00:10:46.397 "zcopy": true, 00:10:46.397 "get_zone_info": false, 00:10:46.397 "zone_management": false, 00:10:46.397 "zone_append": false, 00:10:46.397 "compare": false, 00:10:46.397 "compare_and_write": false, 00:10:46.397 "abort": true, 00:10:46.397 "seek_hole": false, 00:10:46.397 "seek_data": false, 00:10:46.397 "copy": true, 00:10:46.397 "nvme_iov_md": false 00:10:46.397 }, 00:10:46.397 "memory_domains": [ 00:10:46.397 { 00:10:46.397 "dma_device_id": "system", 00:10:46.397 "dma_device_type": 1 00:10:46.397 }, 00:10:46.397 { 00:10:46.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.397 "dma_device_type": 2 00:10:46.397 } 00:10:46.397 ], 00:10:46.397 "driver_specific": {} 00:10:46.397 } 00:10:46.397 ] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.397 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.398 "name": "Existed_Raid", 00:10:46.398 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:46.398 "strip_size_kb": 64, 00:10:46.398 "state": "configuring", 00:10:46.398 "raid_level": "raid0", 00:10:46.398 "superblock": true, 00:10:46.398 "num_base_bdevs": 4, 00:10:46.398 "num_base_bdevs_discovered": 3, 00:10:46.398 "num_base_bdevs_operational": 4, 00:10:46.398 "base_bdevs_list": [ 00:10:46.398 { 00:10:46.398 "name": "BaseBdev1", 00:10:46.398 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:46.398 "is_configured": true, 00:10:46.398 "data_offset": 2048, 00:10:46.398 "data_size": 63488 00:10:46.398 }, 00:10:46.398 { 00:10:46.398 "name": null, 00:10:46.398 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:46.398 "is_configured": false, 00:10:46.398 "data_offset": 0, 00:10:46.398 "data_size": 63488 00:10:46.398 }, 00:10:46.398 { 00:10:46.398 "name": "BaseBdev3", 00:10:46.398 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:46.398 
"is_configured": true, 00:10:46.398 "data_offset": 2048, 00:10:46.398 "data_size": 63488 00:10:46.398 }, 00:10:46.398 { 00:10:46.398 "name": "BaseBdev4", 00:10:46.398 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:46.398 "is_configured": true, 00:10:46.398 "data_offset": 2048, 00:10:46.398 "data_size": 63488 00:10:46.398 } 00:10:46.398 ] 00:10:46.398 }' 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.398 03:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.966 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.967 [2024-11-21 03:19:34.298735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.967 
03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.967 "name": "Existed_Raid", 00:10:46.967 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:46.967 "strip_size_kb": 64, 00:10:46.967 "state": "configuring", 00:10:46.967 "raid_level": "raid0", 00:10:46.967 "superblock": true, 00:10:46.967 "num_base_bdevs": 4, 
00:10:46.967 "num_base_bdevs_discovered": 2, 00:10:46.967 "num_base_bdevs_operational": 4, 00:10:46.967 "base_bdevs_list": [ 00:10:46.967 { 00:10:46.967 "name": "BaseBdev1", 00:10:46.967 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:46.967 "is_configured": true, 00:10:46.967 "data_offset": 2048, 00:10:46.967 "data_size": 63488 00:10:46.967 }, 00:10:46.967 { 00:10:46.967 "name": null, 00:10:46.967 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:46.967 "is_configured": false, 00:10:46.967 "data_offset": 0, 00:10:46.967 "data_size": 63488 00:10:46.967 }, 00:10:46.967 { 00:10:46.967 "name": null, 00:10:46.967 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:46.967 "is_configured": false, 00:10:46.967 "data_offset": 0, 00:10:46.967 "data_size": 63488 00:10:46.967 }, 00:10:46.967 { 00:10:46.967 "name": "BaseBdev4", 00:10:46.967 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:46.967 "is_configured": true, 00:10:46.967 "data_offset": 2048, 00:10:46.967 "data_size": 63488 00:10:46.967 } 00:10:46.967 ] 00:10:46.967 }' 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.967 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.226 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.226 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.226 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.226 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.226 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:47.486 03:19:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.486 [2024-11-21 03:19:34.814983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.486 "name": "Existed_Raid", 00:10:47.486 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:47.486 "strip_size_kb": 64, 00:10:47.486 "state": "configuring", 00:10:47.486 "raid_level": "raid0", 00:10:47.486 "superblock": true, 00:10:47.486 "num_base_bdevs": 4, 00:10:47.486 "num_base_bdevs_discovered": 3, 00:10:47.486 "num_base_bdevs_operational": 4, 00:10:47.486 "base_bdevs_list": [ 00:10:47.486 { 00:10:47.486 "name": "BaseBdev1", 00:10:47.486 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:47.486 "is_configured": true, 00:10:47.486 "data_offset": 2048, 00:10:47.486 "data_size": 63488 00:10:47.486 }, 00:10:47.486 { 00:10:47.486 "name": null, 00:10:47.486 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:47.486 "is_configured": false, 00:10:47.486 "data_offset": 0, 00:10:47.486 "data_size": 63488 00:10:47.486 }, 00:10:47.486 { 00:10:47.486 "name": "BaseBdev3", 00:10:47.486 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:47.486 "is_configured": true, 00:10:47.486 "data_offset": 2048, 00:10:47.486 "data_size": 63488 00:10:47.486 }, 00:10:47.486 { 00:10:47.486 "name": "BaseBdev4", 00:10:47.486 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:47.486 "is_configured": true, 00:10:47.486 "data_offset": 2048, 00:10:47.486 "data_size": 63488 00:10:47.486 } 00:10:47.486 ] 00:10:47.486 }' 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.486 03:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.746 [2024-11-21 03:19:35.247139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.746 "name": "Existed_Raid", 00:10:47.746 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:47.746 "strip_size_kb": 64, 00:10:47.746 "state": "configuring", 00:10:47.746 "raid_level": "raid0", 00:10:47.746 "superblock": true, 00:10:47.746 "num_base_bdevs": 4, 00:10:47.746 "num_base_bdevs_discovered": 2, 00:10:47.746 "num_base_bdevs_operational": 4, 00:10:47.746 "base_bdevs_list": [ 00:10:47.746 { 00:10:47.746 "name": null, 00:10:47.746 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:47.746 "is_configured": false, 00:10:47.746 "data_offset": 0, 00:10:47.746 "data_size": 63488 00:10:47.746 }, 00:10:47.746 { 00:10:47.746 "name": null, 00:10:47.746 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:47.746 "is_configured": false, 00:10:47.746 "data_offset": 0, 00:10:47.746 "data_size": 63488 00:10:47.746 
}, 00:10:47.746 { 00:10:47.746 "name": "BaseBdev3", 00:10:47.746 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:47.746 "is_configured": true, 00:10:47.746 "data_offset": 2048, 00:10:47.746 "data_size": 63488 00:10:47.746 }, 00:10:47.746 { 00:10:47.746 "name": "BaseBdev4", 00:10:47.746 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:47.746 "is_configured": true, 00:10:47.746 "data_offset": 2048, 00:10:47.746 "data_size": 63488 00:10:47.746 } 00:10:47.746 ] 00:10:47.746 }' 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.746 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.313 [2024-11-21 03:19:35.745898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.313 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.314 "name": "Existed_Raid", 00:10:48.314 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:48.314 
"strip_size_kb": 64, 00:10:48.314 "state": "configuring", 00:10:48.314 "raid_level": "raid0", 00:10:48.314 "superblock": true, 00:10:48.314 "num_base_bdevs": 4, 00:10:48.314 "num_base_bdevs_discovered": 3, 00:10:48.314 "num_base_bdevs_operational": 4, 00:10:48.314 "base_bdevs_list": [ 00:10:48.314 { 00:10:48.314 "name": null, 00:10:48.314 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:48.314 "is_configured": false, 00:10:48.314 "data_offset": 0, 00:10:48.314 "data_size": 63488 00:10:48.314 }, 00:10:48.314 { 00:10:48.314 "name": "BaseBdev2", 00:10:48.314 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:48.314 "is_configured": true, 00:10:48.314 "data_offset": 2048, 00:10:48.314 "data_size": 63488 00:10:48.314 }, 00:10:48.314 { 00:10:48.314 "name": "BaseBdev3", 00:10:48.314 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:48.314 "is_configured": true, 00:10:48.314 "data_offset": 2048, 00:10:48.314 "data_size": 63488 00:10:48.314 }, 00:10:48.314 { 00:10:48.314 "name": "BaseBdev4", 00:10:48.314 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:48.314 "is_configured": true, 00:10:48.314 "data_offset": 2048, 00:10:48.314 "data_size": 63488 00:10:48.314 } 00:10:48.314 ] 00:10:48.314 }' 00:10:48.314 03:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.314 03:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7959248d-0a90-45a8-8751-e304afeed873 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.915 [2024-11-21 03:19:36.341331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.915 [2024-11-21 03:19:36.341647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.915 [2024-11-21 03:19:36.341708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.915 [2024-11-21 03:19:36.342004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:48.915 NewBaseBdev 00:10:48.915 [2024-11-21 03:19:36.342196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:48.915 [2024-11-21 03:19:36.342260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:48.915 [2024-11-21 03:19:36.342409] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.915 [ 00:10:48.915 { 00:10:48.915 "name": "NewBaseBdev", 00:10:48.915 "aliases": [ 00:10:48.915 "7959248d-0a90-45a8-8751-e304afeed873" 00:10:48.915 ], 00:10:48.915 "product_name": "Malloc disk", 00:10:48.915 "block_size": 512, 00:10:48.915 "num_blocks": 65536, 00:10:48.915 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:48.915 
"assigned_rate_limits": { 00:10:48.915 "rw_ios_per_sec": 0, 00:10:48.915 "rw_mbytes_per_sec": 0, 00:10:48.915 "r_mbytes_per_sec": 0, 00:10:48.915 "w_mbytes_per_sec": 0 00:10:48.915 }, 00:10:48.915 "claimed": true, 00:10:48.915 "claim_type": "exclusive_write", 00:10:48.915 "zoned": false, 00:10:48.915 "supported_io_types": { 00:10:48.915 "read": true, 00:10:48.915 "write": true, 00:10:48.915 "unmap": true, 00:10:48.915 "flush": true, 00:10:48.915 "reset": true, 00:10:48.915 "nvme_admin": false, 00:10:48.915 "nvme_io": false, 00:10:48.915 "nvme_io_md": false, 00:10:48.915 "write_zeroes": true, 00:10:48.915 "zcopy": true, 00:10:48.915 "get_zone_info": false, 00:10:48.915 "zone_management": false, 00:10:48.915 "zone_append": false, 00:10:48.915 "compare": false, 00:10:48.915 "compare_and_write": false, 00:10:48.915 "abort": true, 00:10:48.915 "seek_hole": false, 00:10:48.915 "seek_data": false, 00:10:48.915 "copy": true, 00:10:48.915 "nvme_iov_md": false 00:10:48.915 }, 00:10:48.915 "memory_domains": [ 00:10:48.915 { 00:10:48.915 "dma_device_id": "system", 00:10:48.915 "dma_device_type": 1 00:10:48.915 }, 00:10:48.915 { 00:10:48.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.915 "dma_device_type": 2 00:10:48.915 } 00:10:48.915 ], 00:10:48.915 "driver_specific": {} 00:10:48.915 } 00:10:48.915 ] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.915 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.916 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.916 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.916 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.916 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.916 "name": "Existed_Raid", 00:10:48.916 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:48.916 "strip_size_kb": 64, 00:10:48.916 "state": "online", 00:10:48.916 "raid_level": "raid0", 00:10:48.916 "superblock": true, 00:10:48.916 "num_base_bdevs": 4, 00:10:48.916 "num_base_bdevs_discovered": 4, 00:10:48.916 "num_base_bdevs_operational": 4, 00:10:48.916 "base_bdevs_list": [ 00:10:48.916 { 00:10:48.916 "name": "NewBaseBdev", 00:10:48.916 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:48.916 "is_configured": true, 00:10:48.916 "data_offset": 2048, 
00:10:48.916 "data_size": 63488 00:10:48.916 }, 00:10:48.916 { 00:10:48.916 "name": "BaseBdev2", 00:10:48.916 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:48.916 "is_configured": true, 00:10:48.916 "data_offset": 2048, 00:10:48.916 "data_size": 63488 00:10:48.916 }, 00:10:48.916 { 00:10:48.916 "name": "BaseBdev3", 00:10:48.916 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:48.916 "is_configured": true, 00:10:48.916 "data_offset": 2048, 00:10:48.916 "data_size": 63488 00:10:48.916 }, 00:10:48.916 { 00:10:48.916 "name": "BaseBdev4", 00:10:48.916 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:48.916 "is_configured": true, 00:10:48.916 "data_offset": 2048, 00:10:48.916 "data_size": 63488 00:10:48.916 } 00:10:48.916 ] 00:10:48.916 }' 00:10:48.916 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.916 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.508 [2024-11-21 03:19:36.853934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.508 "name": "Existed_Raid", 00:10:49.508 "aliases": [ 00:10:49.508 "dac548a2-84ff-47d5-89ae-622ebb652d2d" 00:10:49.508 ], 00:10:49.508 "product_name": "Raid Volume", 00:10:49.508 "block_size": 512, 00:10:49.508 "num_blocks": 253952, 00:10:49.508 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:49.508 "assigned_rate_limits": { 00:10:49.508 "rw_ios_per_sec": 0, 00:10:49.508 "rw_mbytes_per_sec": 0, 00:10:49.508 "r_mbytes_per_sec": 0, 00:10:49.508 "w_mbytes_per_sec": 0 00:10:49.508 }, 00:10:49.508 "claimed": false, 00:10:49.508 "zoned": false, 00:10:49.508 "supported_io_types": { 00:10:49.508 "read": true, 00:10:49.508 "write": true, 00:10:49.508 "unmap": true, 00:10:49.508 "flush": true, 00:10:49.508 "reset": true, 00:10:49.508 "nvme_admin": false, 00:10:49.508 "nvme_io": false, 00:10:49.508 "nvme_io_md": false, 00:10:49.508 "write_zeroes": true, 00:10:49.508 "zcopy": false, 00:10:49.508 "get_zone_info": false, 00:10:49.508 "zone_management": false, 00:10:49.508 "zone_append": false, 00:10:49.508 "compare": false, 00:10:49.508 "compare_and_write": false, 00:10:49.508 "abort": false, 00:10:49.508 "seek_hole": false, 00:10:49.508 "seek_data": false, 00:10:49.508 "copy": false, 00:10:49.508 "nvme_iov_md": false 00:10:49.508 }, 00:10:49.508 "memory_domains": [ 00:10:49.508 { 00:10:49.508 "dma_device_id": "system", 00:10:49.508 "dma_device_type": 1 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.508 "dma_device_type": 2 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 
"dma_device_id": "system", 00:10:49.508 "dma_device_type": 1 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.508 "dma_device_type": 2 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "dma_device_id": "system", 00:10:49.508 "dma_device_type": 1 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.508 "dma_device_type": 2 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "dma_device_id": "system", 00:10:49.508 "dma_device_type": 1 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.508 "dma_device_type": 2 00:10:49.508 } 00:10:49.508 ], 00:10:49.508 "driver_specific": { 00:10:49.508 "raid": { 00:10:49.508 "uuid": "dac548a2-84ff-47d5-89ae-622ebb652d2d", 00:10:49.508 "strip_size_kb": 64, 00:10:49.508 "state": "online", 00:10:49.508 "raid_level": "raid0", 00:10:49.508 "superblock": true, 00:10:49.508 "num_base_bdevs": 4, 00:10:49.508 "num_base_bdevs_discovered": 4, 00:10:49.508 "num_base_bdevs_operational": 4, 00:10:49.508 "base_bdevs_list": [ 00:10:49.508 { 00:10:49.508 "name": "NewBaseBdev", 00:10:49.508 "uuid": "7959248d-0a90-45a8-8751-e304afeed873", 00:10:49.508 "is_configured": true, 00:10:49.508 "data_offset": 2048, 00:10:49.508 "data_size": 63488 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "name": "BaseBdev2", 00:10:49.508 "uuid": "7fc1546a-ae38-4069-b846-7588056ff4bb", 00:10:49.508 "is_configured": true, 00:10:49.508 "data_offset": 2048, 00:10:49.508 "data_size": 63488 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "name": "BaseBdev3", 00:10:49.508 "uuid": "6222446a-724d-4040-876f-1ef730a42a56", 00:10:49.508 "is_configured": true, 00:10:49.508 "data_offset": 2048, 00:10:49.508 "data_size": 63488 00:10:49.508 }, 00:10:49.508 { 00:10:49.508 "name": "BaseBdev4", 00:10:49.508 "uuid": "7a7da2be-e118-449d-8088-8ef73a5e9a52", 00:10:49.508 "is_configured": true, 00:10:49.508 "data_offset": 2048, 00:10:49.508 "data_size": 63488 00:10:49.508 } 00:10:49.508 
] 00:10:49.508 } 00:10:49.508 } 00:10:49.508 }' 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.508 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:49.508 BaseBdev2 00:10:49.508 BaseBdev3 00:10:49.508 BaseBdev4' 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.509 03:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.509 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.769 [2024-11-21 03:19:37.193646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.769 [2024-11-21 03:19:37.193784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.769 [2024-11-21 03:19:37.193883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.769 [2024-11-21 03:19:37.193956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.769 [2024-11-21 03:19:37.193978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83049 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83049 ']' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83049 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83049 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83049' 00:10:49.769 killing process with pid 83049 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83049 00:10:49.769 [2024-11-21 03:19:37.244094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.769 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83049 00:10:49.769 [2024-11-21 03:19:37.287716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.029 03:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.029 00:10:50.029 real 0m9.944s 00:10:50.029 user 0m16.861s 00:10:50.029 sys 0m2.237s 00:10:50.029 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.029 03:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.029 ************************************ 00:10:50.029 END TEST raid_state_function_test_sb 00:10:50.029 ************************************ 00:10:50.029 03:19:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:50.029 03:19:37 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.029 03:19:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.029 03:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.289 ************************************ 00:10:50.289 START TEST raid_superblock_test 00:10:50.289 ************************************ 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:50.289 03:19:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83697 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83697 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83697 ']' 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.289 03:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.289 [2024-11-21 03:19:37.697575] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:50.289 [2024-11-21 03:19:37.697813] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83697 ] 00:10:50.289 [2024-11-21 03:19:37.842505] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:50.548 [2024-11-21 03:19:37.879812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.548 [2024-11-21 03:19:37.910708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.548 [2024-11-21 03:19:37.954119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.548 [2024-11-21 03:19:37.954238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 malloc1 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 [2024-11-21 03:19:38.630689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:51.114 [2024-11-21 03:19:38.630870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.114 [2024-11-21 03:19:38.630922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:51.114 [2024-11-21 03:19:38.630969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.114 [2024-11-21 03:19:38.633427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.114 [2024-11-21 03:19:38.633535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:51.114 pt1 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 malloc2 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.114 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.114 [2024-11-21 03:19:38.659913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.114 [2024-11-21 03:19:38.660107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.114 [2024-11-21 03:19:38.660154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:51.114 [2024-11-21 03:19:38.660211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.114 [2024-11-21 03:19:38.662432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.114 [2024-11-21 03:19:38.662520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.114 pt2 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.115 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.373 malloc3 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.373 [2024-11-21 03:19:38.689711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.373 [2024-11-21 03:19:38.689869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.373 [2024-11-21 03:19:38.689912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:51.373 [2024-11-21 03:19:38.689944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:51.373 [2024-11-21 03:19:38.692413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.373 [2024-11-21 03:19:38.692505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.373 pt3 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.373 malloc4 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:51.373 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.373 [2024-11-21 03:19:38.737975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.373 [2024-11-21 03:19:38.738165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.374 [2024-11-21 03:19:38.738225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:51.374 [2024-11-21 03:19:38.738273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.374 [2024-11-21 03:19:38.741304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.374 [2024-11-21 03:19:38.741415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.374 pt4 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.374 [2024-11-21 03:19:38.750145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:51.374 [2024-11-21 03:19:38.752225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.374 [2024-11-21 03:19:38.752312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.374 [2024-11-21 03:19:38.752389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 
00:10:51.374 [2024-11-21 03:19:38.752569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:51.374 [2024-11-21 03:19:38.752581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.374 [2024-11-21 03:19:38.752848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:51.374 [2024-11-21 03:19:38.752992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:51.374 [2024-11-21 03:19:38.753005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:51.374 [2024-11-21 03:19:38.753151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.374 
03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.374 "name": "raid_bdev1", 00:10:51.374 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:51.374 "strip_size_kb": 64, 00:10:51.374 "state": "online", 00:10:51.374 "raid_level": "raid0", 00:10:51.374 "superblock": true, 00:10:51.374 "num_base_bdevs": 4, 00:10:51.374 "num_base_bdevs_discovered": 4, 00:10:51.374 "num_base_bdevs_operational": 4, 00:10:51.374 "base_bdevs_list": [ 00:10:51.374 { 00:10:51.374 "name": "pt1", 00:10:51.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.374 "is_configured": true, 00:10:51.374 "data_offset": 2048, 00:10:51.374 "data_size": 63488 00:10:51.374 }, 00:10:51.374 { 00:10:51.374 "name": "pt2", 00:10:51.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.374 "is_configured": true, 00:10:51.374 "data_offset": 2048, 00:10:51.374 "data_size": 63488 00:10:51.374 }, 00:10:51.374 { 00:10:51.374 "name": "pt3", 00:10:51.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.374 "is_configured": true, 00:10:51.374 "data_offset": 2048, 00:10:51.374 "data_size": 63488 00:10:51.374 }, 00:10:51.374 { 00:10:51.374 "name": "pt4", 00:10:51.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.374 "is_configured": true, 00:10:51.374 "data_offset": 2048, 00:10:51.374 "data_size": 63488 00:10:51.374 } 00:10:51.374 ] 00:10:51.374 }' 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.374 03:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.941 [2024-11-21 03:19:39.218611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.941 "name": "raid_bdev1", 00:10:51.941 "aliases": [ 00:10:51.941 "92a85c30-d330-490e-9ca7-c4e96d51a4af" 00:10:51.941 ], 00:10:51.941 "product_name": "Raid Volume", 00:10:51.941 "block_size": 512, 00:10:51.941 "num_blocks": 253952, 00:10:51.941 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:51.941 "assigned_rate_limits": { 00:10:51.941 "rw_ios_per_sec": 0, 00:10:51.941 "rw_mbytes_per_sec": 0, 00:10:51.941 "r_mbytes_per_sec": 0, 00:10:51.941 
"w_mbytes_per_sec": 0 00:10:51.941 }, 00:10:51.941 "claimed": false, 00:10:51.941 "zoned": false, 00:10:51.941 "supported_io_types": { 00:10:51.941 "read": true, 00:10:51.941 "write": true, 00:10:51.941 "unmap": true, 00:10:51.941 "flush": true, 00:10:51.941 "reset": true, 00:10:51.941 "nvme_admin": false, 00:10:51.941 "nvme_io": false, 00:10:51.941 "nvme_io_md": false, 00:10:51.941 "write_zeroes": true, 00:10:51.941 "zcopy": false, 00:10:51.941 "get_zone_info": false, 00:10:51.941 "zone_management": false, 00:10:51.941 "zone_append": false, 00:10:51.941 "compare": false, 00:10:51.941 "compare_and_write": false, 00:10:51.941 "abort": false, 00:10:51.941 "seek_hole": false, 00:10:51.941 "seek_data": false, 00:10:51.941 "copy": false, 00:10:51.941 "nvme_iov_md": false 00:10:51.941 }, 00:10:51.941 "memory_domains": [ 00:10:51.941 { 00:10:51.941 "dma_device_id": "system", 00:10:51.941 "dma_device_type": 1 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.941 "dma_device_type": 2 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "system", 00:10:51.941 "dma_device_type": 1 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.941 "dma_device_type": 2 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "system", 00:10:51.941 "dma_device_type": 1 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.941 "dma_device_type": 2 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "system", 00:10:51.941 "dma_device_type": 1 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.941 "dma_device_type": 2 00:10:51.941 } 00:10:51.941 ], 00:10:51.941 "driver_specific": { 00:10:51.941 "raid": { 00:10:51.941 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:51.941 "strip_size_kb": 64, 00:10:51.941 "state": "online", 00:10:51.941 "raid_level": "raid0", 00:10:51.941 "superblock": true, 
00:10:51.941 "num_base_bdevs": 4, 00:10:51.941 "num_base_bdevs_discovered": 4, 00:10:51.941 "num_base_bdevs_operational": 4, 00:10:51.941 "base_bdevs_list": [ 00:10:51.941 { 00:10:51.941 "name": "pt1", 00:10:51.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.941 "is_configured": true, 00:10:51.941 "data_offset": 2048, 00:10:51.941 "data_size": 63488 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "name": "pt2", 00:10:51.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.941 "is_configured": true, 00:10:51.941 "data_offset": 2048, 00:10:51.941 "data_size": 63488 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "name": "pt3", 00:10:51.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.941 "is_configured": true, 00:10:51.941 "data_offset": 2048, 00:10:51.941 "data_size": 63488 00:10:51.941 }, 00:10:51.941 { 00:10:51.941 "name": "pt4", 00:10:51.941 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.941 "is_configured": true, 00:10:51.941 "data_offset": 2048, 00:10:51.941 "data_size": 63488 00:10:51.941 } 00:10:51.941 ] 00:10:51.941 } 00:10:51.941 } 00:10:51.941 }' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.941 pt2 00:10:51.941 pt3 00:10:51.941 pt4' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:51.941 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.942 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.942 03:19:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.942 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 [2024-11-21 03:19:39.570709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92a85c30-d330-490e-9ca7-c4e96d51a4af 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92a85c30-d330-490e-9ca7-c4e96d51a4af ']' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 [2024-11-21 03:19:39.618335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.201 [2024-11-21 03:19:39.618471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.201 [2024-11-21 03:19:39.618606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.201 [2024-11-21 03:19:39.618708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.201 [2024-11-21 03:19:39.618764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:52.201 03:19:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.201 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b 
''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.460 [2024-11-21 03:19:39.774466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.460 [2024-11-21 03:19:39.776774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:52.460 [2024-11-21 03:19:39.776887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:52.460 [2024-11-21 03:19:39.776956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:52.460 [2024-11-21 03:19:39.777059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:52.460 [2024-11-21 03:19:39.777161] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:52.460 [2024-11-21 03:19:39.777237] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:52.460 [2024-11-21 03:19:39.777299] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:52.460 [2024-11-21 03:19:39.777353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.460 [2024-11-21 03:19:39.777416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:52.460 request: 00:10:52.460 { 00:10:52.460 "name": "raid_bdev1", 00:10:52.460 "raid_level": "raid0", 00:10:52.460 "base_bdevs": [ 00:10:52.460 "malloc1", 00:10:52.460 "malloc2", 00:10:52.460 "malloc3", 00:10:52.460 "malloc4" 00:10:52.460 ], 00:10:52.460 "strip_size_kb": 64, 00:10:52.460 
"superblock": false, 00:10:52.460 "method": "bdev_raid_create", 00:10:52.460 "req_id": 1 00:10:52.460 } 00:10:52.460 Got JSON-RPC error response 00:10:52.460 response: 00:10:52.460 { 00:10:52.460 "code": -17, 00:10:52.460 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:52.460 } 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.460 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.460 [2024-11-21 03:19:39.830418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:10:52.460 [2024-11-21 03:19:39.830590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.460 [2024-11-21 03:19:39.830629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:52.461 [2024-11-21 03:19:39.830665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.461 [2024-11-21 03:19:39.833097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.461 [2024-11-21 03:19:39.833199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:52.461 [2024-11-21 03:19:39.833321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:52.461 [2024-11-21 03:19:39.833405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:52.461 pt1 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.461 "name": "raid_bdev1", 00:10:52.461 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:52.461 "strip_size_kb": 64, 00:10:52.461 "state": "configuring", 00:10:52.461 "raid_level": "raid0", 00:10:52.461 "superblock": true, 00:10:52.461 "num_base_bdevs": 4, 00:10:52.461 "num_base_bdevs_discovered": 1, 00:10:52.461 "num_base_bdevs_operational": 4, 00:10:52.461 "base_bdevs_list": [ 00:10:52.461 { 00:10:52.461 "name": "pt1", 00:10:52.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.461 "is_configured": true, 00:10:52.461 "data_offset": 2048, 00:10:52.461 "data_size": 63488 00:10:52.461 }, 00:10:52.461 { 00:10:52.461 "name": null, 00:10:52.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.461 "is_configured": false, 00:10:52.461 "data_offset": 2048, 00:10:52.461 "data_size": 63488 00:10:52.461 }, 00:10:52.461 { 00:10:52.461 "name": null, 00:10:52.461 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.461 "is_configured": false, 00:10:52.461 "data_offset": 2048, 00:10:52.461 "data_size": 63488 00:10:52.461 }, 00:10:52.461 { 00:10:52.461 "name": null, 00:10:52.461 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.461 "is_configured": false, 00:10:52.461 "data_offset": 
2048, 00:10:52.461 "data_size": 63488 00:10:52.461 } 00:10:52.461 ] 00:10:52.461 }' 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.461 03:19:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.029 [2024-11-21 03:19:40.326577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.029 [2024-11-21 03:19:40.326762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.029 [2024-11-21 03:19:40.326790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:53.029 [2024-11-21 03:19:40.326803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.029 [2024-11-21 03:19:40.327289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.029 [2024-11-21 03:19:40.327314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.029 [2024-11-21 03:19:40.327396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.029 [2024-11-21 03:19:40.327421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.029 pt2 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:53.029 03:19:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.029 [2024-11-21 03:19:40.338562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.029 "name": "raid_bdev1", 00:10:53.029 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:53.029 "strip_size_kb": 64, 00:10:53.029 "state": "configuring", 00:10:53.029 "raid_level": "raid0", 00:10:53.029 "superblock": true, 00:10:53.029 "num_base_bdevs": 4, 00:10:53.029 "num_base_bdevs_discovered": 1, 00:10:53.029 "num_base_bdevs_operational": 4, 00:10:53.029 "base_bdevs_list": [ 00:10:53.029 { 00:10:53.029 "name": "pt1", 00:10:53.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.029 "is_configured": true, 00:10:53.029 "data_offset": 2048, 00:10:53.029 "data_size": 63488 00:10:53.029 }, 00:10:53.029 { 00:10:53.029 "name": null, 00:10:53.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.029 "is_configured": false, 00:10:53.029 "data_offset": 0, 00:10:53.029 "data_size": 63488 00:10:53.029 }, 00:10:53.029 { 00:10:53.029 "name": null, 00:10:53.029 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.029 "is_configured": false, 00:10:53.029 "data_offset": 2048, 00:10:53.029 "data_size": 63488 00:10:53.029 }, 00:10:53.029 { 00:10:53.029 "name": null, 00:10:53.029 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.029 "is_configured": false, 00:10:53.029 "data_offset": 2048, 00:10:53.029 "data_size": 63488 00:10:53.029 } 00:10:53.029 ] 00:10:53.029 }' 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.029 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.287 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:53.287 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.287 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:10:53.287 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.287 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.287 [2024-11-21 03:19:40.790669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.287 [2024-11-21 03:19:40.790834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.287 [2024-11-21 03:19:40.790887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:53.287 [2024-11-21 03:19:40.790926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.287 [2024-11-21 03:19:40.791419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.288 [2024-11-21 03:19:40.791487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.288 [2024-11-21 03:19:40.791602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:53.288 [2024-11-21 03:19:40.791658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.288 pt2 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.288 [2024-11-21 03:19:40.802673] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc3 00:10:53.288 [2024-11-21 03:19:40.802804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.288 [2024-11-21 03:19:40.802830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:53.288 [2024-11-21 03:19:40.802841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.288 [2024-11-21 03:19:40.803292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.288 [2024-11-21 03:19:40.803312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:53.288 [2024-11-21 03:19:40.803389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:53.288 [2024-11-21 03:19:40.803409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.288 pt3 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.288 [2024-11-21 03:19:40.814663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:53.288 [2024-11-21 03:19:40.814731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.288 [2024-11-21 03:19:40.814754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:53.288 [2024-11-21 03:19:40.814763] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.288 [2024-11-21 03:19:40.815201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.288 [2024-11-21 03:19:40.815233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:53.288 [2024-11-21 03:19:40.815310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:53.288 [2024-11-21 03:19:40.815330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:53.288 [2024-11-21 03:19:40.815444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:53.288 [2024-11-21 03:19:40.815462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.288 [2024-11-21 03:19:40.815717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:53.288 [2024-11-21 03:19:40.815854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:53.288 [2024-11-21 03:19:40.815867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:53.288 [2024-11-21 03:19:40.815973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.288 pt4 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.288 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.546 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.546 "name": "raid_bdev1", 00:10:53.546 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:53.546 "strip_size_kb": 64, 00:10:53.546 "state": "online", 00:10:53.546 "raid_level": "raid0", 00:10:53.546 "superblock": true, 00:10:53.546 "num_base_bdevs": 4, 00:10:53.546 "num_base_bdevs_discovered": 4, 00:10:53.546 "num_base_bdevs_operational": 4, 00:10:53.546 "base_bdevs_list": [ 00:10:53.546 { 00:10:53.546 "name": "pt1", 00:10:53.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.546 "is_configured": true, 00:10:53.546 "data_offset": 2048, 00:10:53.546 
"data_size": 63488 00:10:53.546 }, 00:10:53.546 { 00:10:53.546 "name": "pt2", 00:10:53.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.546 "is_configured": true, 00:10:53.546 "data_offset": 2048, 00:10:53.546 "data_size": 63488 00:10:53.546 }, 00:10:53.546 { 00:10:53.546 "name": "pt3", 00:10:53.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.546 "is_configured": true, 00:10:53.546 "data_offset": 2048, 00:10:53.546 "data_size": 63488 00:10:53.546 }, 00:10:53.546 { 00:10:53.546 "name": "pt4", 00:10:53.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.546 "is_configured": true, 00:10:53.546 "data_offset": 2048, 00:10:53.546 "data_size": 63488 00:10:53.546 } 00:10:53.546 ] 00:10:53.546 }' 00:10:53.546 03:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.546 03:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:53.804 [2024-11-21 03:19:41.263183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.804 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.804 "name": "raid_bdev1", 00:10:53.804 "aliases": [ 00:10:53.804 "92a85c30-d330-490e-9ca7-c4e96d51a4af" 00:10:53.804 ], 00:10:53.804 "product_name": "Raid Volume", 00:10:53.804 "block_size": 512, 00:10:53.804 "num_blocks": 253952, 00:10:53.804 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:53.804 "assigned_rate_limits": { 00:10:53.804 "rw_ios_per_sec": 0, 00:10:53.804 "rw_mbytes_per_sec": 0, 00:10:53.804 "r_mbytes_per_sec": 0, 00:10:53.804 "w_mbytes_per_sec": 0 00:10:53.804 }, 00:10:53.804 "claimed": false, 00:10:53.804 "zoned": false, 00:10:53.804 "supported_io_types": { 00:10:53.804 "read": true, 00:10:53.804 "write": true, 00:10:53.804 "unmap": true, 00:10:53.804 "flush": true, 00:10:53.804 "reset": true, 00:10:53.804 "nvme_admin": false, 00:10:53.804 "nvme_io": false, 00:10:53.804 "nvme_io_md": false, 00:10:53.804 "write_zeroes": true, 00:10:53.804 "zcopy": false, 00:10:53.804 "get_zone_info": false, 00:10:53.804 "zone_management": false, 00:10:53.804 "zone_append": false, 00:10:53.804 "compare": false, 00:10:53.804 "compare_and_write": false, 00:10:53.804 "abort": false, 00:10:53.804 "seek_hole": false, 00:10:53.804 "seek_data": false, 00:10:53.804 "copy": false, 00:10:53.804 "nvme_iov_md": false 00:10:53.804 }, 00:10:53.804 "memory_domains": [ 00:10:53.804 { 00:10:53.804 "dma_device_id": "system", 00:10:53.804 "dma_device_type": 1 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.804 "dma_device_type": 2 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": "system", 00:10:53.804 "dma_device_type": 1 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:53.804 "dma_device_type": 2 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": "system", 00:10:53.804 "dma_device_type": 1 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.804 "dma_device_type": 2 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": "system", 00:10:53.804 "dma_device_type": 1 00:10:53.804 }, 00:10:53.804 { 00:10:53.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.804 "dma_device_type": 2 00:10:53.804 } 00:10:53.804 ], 00:10:53.804 "driver_specific": { 00:10:53.804 "raid": { 00:10:53.804 "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af", 00:10:53.804 "strip_size_kb": 64, 00:10:53.804 "state": "online", 00:10:53.804 "raid_level": "raid0", 00:10:53.804 "superblock": true, 00:10:53.804 "num_base_bdevs": 4, 00:10:53.804 "num_base_bdevs_discovered": 4, 00:10:53.805 "num_base_bdevs_operational": 4, 00:10:53.805 "base_bdevs_list": [ 00:10:53.805 { 00:10:53.805 "name": "pt1", 00:10:53.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.805 "is_configured": true, 00:10:53.805 "data_offset": 2048, 00:10:53.805 "data_size": 63488 00:10:53.805 }, 00:10:53.805 { 00:10:53.805 "name": "pt2", 00:10:53.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.805 "is_configured": true, 00:10:53.805 "data_offset": 2048, 00:10:53.805 "data_size": 63488 00:10:53.805 }, 00:10:53.805 { 00:10:53.805 "name": "pt3", 00:10:53.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.805 "is_configured": true, 00:10:53.805 "data_offset": 2048, 00:10:53.805 "data_size": 63488 00:10:53.805 }, 00:10:53.805 { 00:10:53.805 "name": "pt4", 00:10:53.805 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.805 "is_configured": true, 00:10:53.805 "data_offset": 2048, 00:10:53.805 "data_size": 63488 00:10:53.805 } 00:10:53.805 ] 00:10:53.805 } 00:10:53.805 } 00:10:53.805 }' 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:53.805 pt2 00:10:53.805 pt3 00:10:53.805 pt4' 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.805 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.063 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.063 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 
03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 [2024-11-21 03:19:41.527290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92a85c30-d330-490e-9ca7-c4e96d51a4af '!=' 92a85c30-d330-490e-9ca7-c4e96d51a4af ']' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83697 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83697 ']' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83697 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83697 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83697' 00:10:54.064 killing process with pid 83697 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83697 00:10:54.064 [2024-11-21 03:19:41.603756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.064 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83697 00:10:54.064 [2024-11-21 03:19:41.603969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.064 [2024-11-21 03:19:41.604082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.064 [2024-11-21 03:19:41.604095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:54.323 [2024-11-21 03:19:41.651357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.323 03:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:54.323 00:10:54.323 real 0m4.281s 00:10:54.323 user 0m6.693s 00:10:54.323 sys 0m1.060s 00:10:54.323 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.323 03:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.323 ************************************ 00:10:54.323 END TEST raid_superblock_test 00:10:54.323 ************************************ 00:10:54.582 03:19:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:54.582 03:19:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.582 03:19:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.582 03:19:41 bdev_raid -- 
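The `verify_raid_bdev_state` calls traced above fetch the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, filter it with `jq`, and compare fields (`state`, `raid_level`, `strip_size_kb`, base-bdev counts) against expected values. As a rough illustration only (not part of the test suite; the JSON below is abbreviated from the `raid_bdev_info` dumped in the log, and the function name merely mirrors the shell helper), the same field checks can be sketched in Python:

```python
import json

# JSON abbreviated from the raid_bdev_info that `rpc_cmd bdev_raid_get_bdevs all`
# returned in the log above, keeping only the fields the check compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "92a85c30-d330-490e-9ca7-c4e96d51a4af",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": "pt2", "is_configured": true},
    {"name": "pt3", "is_configured": true},
    {"name": "pt4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Sketch of the jq-based comparisons bdev_raid.sh's verify_raid_bdev_state
    # helper performs on the RPC output (hypothetical Python rendering, not SPDK code).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # every configured base bdev should have been discovered
    configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
    assert info["num_base_bdevs_discovered"] == len(configured)
    return True

print(verify_raid_bdev_state(raid_bdev_info, "online", "raid0", 64, 4))
```

This matches the invocation `verify_raid_bdev_state raid_bdev1 online raid0 64 4` seen at `bdev_raid.sh@483` in the trace above: all four passthru bdevs (`pt1`-`pt4`) were discovered and configured, so the raid0 volume reports `online`.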
common/autotest_common.sh@10 -- # set +x 00:10:54.582 ************************************ 00:10:54.582 START TEST raid_read_error_test 00:10:54.582 ************************************ 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6KfHyDFmQw 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83951 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83951 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83951 ']' 00:10:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.582 03:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 [2024-11-21 03:19:42.063598] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:54.582 [2024-11-21 03:19:42.063738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83951 ] 00:10:54.841 [2024-11-21 03:19:42.205343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:54.841 [2024-11-21 03:19:42.226033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.841 [2024-11-21 03:19:42.256948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.841 [2024-11-21 03:19:42.300467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.841 [2024-11-21 03:19:42.300585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.410 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.410 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.410 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.410 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.411 BaseBdev1_malloc 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.411 true 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.411 [2024-11-21 03:19:42.940910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.411 [2024-11-21 03:19:42.941100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.411 [2024-11-21 03:19:42.941150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.411 [2024-11-21 03:19:42.941197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.411 [2024-11-21 03:19:42.943631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.411 [2024-11-21 03:19:42.943728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.411 BaseBdev1 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.411 BaseBdev2_malloc 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.411 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 true 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 [2024-11-21 03:19:42.981937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.670 [2024-11-21 03:19:42.982028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.670 [2024-11-21 03:19:42.982051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.670 [2024-11-21 03:19:42.982064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.670 [2024-11-21 03:19:42.984478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.670 [2024-11-21 03:19:42.984528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.670 BaseBdev2 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.670 03:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 BaseBdev3_malloc 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 true 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 [2024-11-21 03:19:43.023992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.670 [2024-11-21 03:19:43.024090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.670 [2024-11-21 03:19:43.024113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.670 [2024-11-21 03:19:43.024127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.670 [2024-11-21 03:19:43.026464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.670 [2024-11-21 03:19:43.026595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.670 BaseBdev3 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 BaseBdev4_malloc 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 true 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 [2024-11-21 03:19:43.072452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.670 [2024-11-21 03:19:43.072613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.670 [2024-11-21 03:19:43.072653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.670 [2024-11-21 03:19:43.072686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.670 [2024-11-21 03:19:43.074999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.670 [2024-11-21 03:19:43.075140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.670 BaseBdev4 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.670 [2024-11-21 03:19:43.084513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.670 [2024-11-21 03:19:43.086455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.670 [2024-11-21 03:19:43.086539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.670 [2024-11-21 03:19:43.086594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.670 [2024-11-21 03:19:43.086807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.670 [2024-11-21 03:19:43.086821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.670 [2024-11-21 03:19:43.087159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:55.670 [2024-11-21 03:19:43.087322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.670 [2024-11-21 03:19:43.087335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:55.670 [2024-11-21 03:19:43.087489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.670 03:19:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.670 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.671 "name": "raid_bdev1", 00:10:55.671 "uuid": "7c894875-a26a-4a59-a46e-1b475c20eeca", 00:10:55.671 "strip_size_kb": 64, 00:10:55.671 "state": "online", 00:10:55.671 "raid_level": "raid0", 00:10:55.671 "superblock": true, 00:10:55.671 "num_base_bdevs": 4, 00:10:55.671 "num_base_bdevs_discovered": 4, 00:10:55.671 "num_base_bdevs_operational": 4, 00:10:55.671 "base_bdevs_list": [ 00:10:55.671 { 00:10:55.671 "name": "BaseBdev1", 00:10:55.671 "uuid": "1c2bbb5c-db00-5a5b-ab7d-240bfb1ec657", 00:10:55.671 "is_configured": true, 00:10:55.671 "data_offset": 2048, 00:10:55.671 "data_size": 63488 00:10:55.671 }, 00:10:55.671 { 00:10:55.671 "name": "BaseBdev2", 00:10:55.671 "uuid": "8e05366b-fc27-547c-88f5-4476c9dcf371", 
00:10:55.671 "is_configured": true, 00:10:55.671 "data_offset": 2048, 00:10:55.671 "data_size": 63488 00:10:55.671 }, 00:10:55.671 { 00:10:55.671 "name": "BaseBdev3", 00:10:55.671 "uuid": "d9599633-365c-520e-b7af-9bf3ee3f5095", 00:10:55.671 "is_configured": true, 00:10:55.671 "data_offset": 2048, 00:10:55.671 "data_size": 63488 00:10:55.671 }, 00:10:55.671 { 00:10:55.671 "name": "BaseBdev4", 00:10:55.671 "uuid": "202b74d9-634b-5d6e-a1ef-6552e3f5fc3c", 00:10:55.671 "is_configured": true, 00:10:55.671 "data_offset": 2048, 00:10:55.671 "data_size": 63488 00:10:55.671 } 00:10:55.671 ] 00:10:55.671 }' 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.671 03:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.237 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.237 03:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.237 [2024-11-21 03:19:43.637073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:57.210 03:19:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.210 "name": "raid_bdev1", 00:10:57.210 "uuid": "7c894875-a26a-4a59-a46e-1b475c20eeca", 00:10:57.210 "strip_size_kb": 64, 00:10:57.210 "state": "online", 00:10:57.210 "raid_level": "raid0", 00:10:57.210 "superblock": true, 00:10:57.210 "num_base_bdevs": 4, 
00:10:57.210 "num_base_bdevs_discovered": 4, 00:10:57.210 "num_base_bdevs_operational": 4, 00:10:57.210 "base_bdevs_list": [ 00:10:57.210 { 00:10:57.210 "name": "BaseBdev1", 00:10:57.210 "uuid": "1c2bbb5c-db00-5a5b-ab7d-240bfb1ec657", 00:10:57.210 "is_configured": true, 00:10:57.210 "data_offset": 2048, 00:10:57.210 "data_size": 63488 00:10:57.210 }, 00:10:57.210 { 00:10:57.210 "name": "BaseBdev2", 00:10:57.210 "uuid": "8e05366b-fc27-547c-88f5-4476c9dcf371", 00:10:57.210 "is_configured": true, 00:10:57.210 "data_offset": 2048, 00:10:57.210 "data_size": 63488 00:10:57.210 }, 00:10:57.210 { 00:10:57.210 "name": "BaseBdev3", 00:10:57.210 "uuid": "d9599633-365c-520e-b7af-9bf3ee3f5095", 00:10:57.210 "is_configured": true, 00:10:57.210 "data_offset": 2048, 00:10:57.210 "data_size": 63488 00:10:57.210 }, 00:10:57.210 { 00:10:57.210 "name": "BaseBdev4", 00:10:57.210 "uuid": "202b74d9-634b-5d6e-a1ef-6552e3f5fc3c", 00:10:57.210 "is_configured": true, 00:10:57.210 "data_offset": 2048, 00:10:57.210 "data_size": 63488 00:10:57.210 } 00:10:57.210 ] 00:10:57.210 }' 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.210 03:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.776 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.776 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.776 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.776 [2024-11-21 03:19:45.061139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.776 [2024-11-21 03:19:45.061288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.776 [2024-11-21 03:19:45.064189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.776 [2024-11-21 03:19:45.064302] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.776 [2024-11-21 03:19:45.064397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.776 [2024-11-21 03:19:45.064480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:57.776 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.776 { 00:10:57.776 "results": [ 00:10:57.776 { 00:10:57.776 "job": "raid_bdev1", 00:10:57.776 "core_mask": "0x1", 00:10:57.776 "workload": "randrw", 00:10:57.776 "percentage": 50, 00:10:57.776 "status": "finished", 00:10:57.776 "queue_depth": 1, 00:10:57.776 "io_size": 131072, 00:10:57.776 "runtime": 1.421974, 00:10:57.776 "iops": 14582.5451098262, 00:10:57.776 "mibps": 1822.818138728275, 00:10:57.776 "io_failed": 1, 00:10:57.776 "io_timeout": 0, 00:10:57.776 "avg_latency_us": 95.12194973712549, 00:10:57.776 "min_latency_us": 28.11470408785845, 00:10:57.776 "max_latency_us": 1713.6581539266103 00:10:57.776 } 00:10:57.776 ], 00:10:57.776 "core_count": 1 00:10:57.776 } 00:10:57.776 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83951 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83951 ']' 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83951 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83951 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83951' 00:10:57.777 killing process with pid 83951 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83951 00:10:57.777 [2024-11-21 03:19:45.112358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.777 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83951 00:10:57.777 [2024-11-21 03:19:45.150170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.035 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6KfHyDFmQw 00:10:58.035 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.035 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:58.036 ************************************ 00:10:58.036 END TEST raid_read_error_test 00:10:58.036 ************************************ 00:10:58.036 00:10:58.036 real 0m3.437s 00:10:58.036 user 0m4.339s 00:10:58.036 sys 0m0.601s 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.036 03:19:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.036 03:19:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:10:58.036 03:19:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.036 03:19:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.036 03:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.036 ************************************ 00:10:58.036 START TEST raid_write_error_test 00:10:58.036 ************************************ 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IjvswJOl6t 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84085 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 84085 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 84085 ']' 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.036 03:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.036 [2024-11-21 03:19:45.564849] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:10:58.036 [2024-11-21 03:19:45.565095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84085 ] 00:10:58.295 [2024-11-21 03:19:45.705739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:58.295 [2024-11-21 03:19:45.742990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.295 [2024-11-21 03:19:45.773420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.295 [2024-11-21 03:19:45.816783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.295 [2024-11-21 03:19:45.816923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 BaseBdev1_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 true 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 [2024-11-21 03:19:46.473644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.234 [2024-11-21 03:19:46.473729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.234 [2024-11-21 03:19:46.473752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.234 [2024-11-21 03:19:46.473776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.234 [2024-11-21 03:19:46.476192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.234 [2024-11-21 03:19:46.476253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.234 BaseBdev1 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 BaseBdev2_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 true 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 [2024-11-21 03:19:46.514934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.234 [2024-11-21 03:19:46.515025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.234 [2024-11-21 03:19:46.515045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.234 [2024-11-21 03:19:46.515057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.234 [2024-11-21 03:19:46.517399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.234 [2024-11-21 03:19:46.517449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.234 BaseBdev2 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 BaseBdev3_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 true 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 [2024-11-21 03:19:46.556122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.234 [2024-11-21 03:19:46.556204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.234 [2024-11-21 03:19:46.556223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.234 [2024-11-21 03:19:46.556235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.234 [2024-11-21 03:19:46.558531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.234 [2024-11-21 03:19:46.558669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.234 BaseBdev3 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 BaseBdev4_malloc 00:10:59.234 
03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 true 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 [2024-11-21 03:19:46.606633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:59.234 [2024-11-21 03:19:46.606806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.234 [2024-11-21 03:19:46.606831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:59.234 [2024-11-21 03:19:46.606842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.234 [2024-11-21 03:19:46.609263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.234 [2024-11-21 03:19:46.609317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:59.234 BaseBdev4 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:59.234 03:19:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.234 [2024-11-21 03:19:46.618665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.234 [2024-11-21 03:19:46.620625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.234 [2024-11-21 03:19:46.620708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.234 [2024-11-21 03:19:46.620765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.234 [2024-11-21 03:19:46.620977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.234 [2024-11-21 03:19:46.620991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.234 [2024-11-21 03:19:46.621302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:59.234 [2024-11-21 03:19:46.621457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.234 [2024-11-21 03:19:46.621474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:59.234 [2024-11-21 03:19:46.621629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.234 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.235 "name": "raid_bdev1", 00:10:59.235 "uuid": "1acc73bb-df76-4832-823b-9551ac862926", 00:10:59.235 "strip_size_kb": 64, 00:10:59.235 "state": "online", 00:10:59.235 "raid_level": "raid0", 00:10:59.235 "superblock": true, 00:10:59.235 "num_base_bdevs": 4, 00:10:59.235 "num_base_bdevs_discovered": 4, 00:10:59.235 "num_base_bdevs_operational": 4, 00:10:59.235 "base_bdevs_list": [ 00:10:59.235 { 00:10:59.235 "name": "BaseBdev1", 00:10:59.235 "uuid": "19ff33f7-459d-52e0-934b-0793d0aa924c", 00:10:59.235 "is_configured": true, 00:10:59.235 "data_offset": 2048, 00:10:59.235 "data_size": 63488 00:10:59.235 }, 00:10:59.235 { 00:10:59.235 
"name": "BaseBdev2", 00:10:59.235 "uuid": "946ea691-f003-5b0b-b293-02ca985a6c45", 00:10:59.235 "is_configured": true, 00:10:59.235 "data_offset": 2048, 00:10:59.235 "data_size": 63488 00:10:59.235 }, 00:10:59.235 { 00:10:59.235 "name": "BaseBdev3", 00:10:59.235 "uuid": "1d506683-fa96-578c-8185-4f7d95c15301", 00:10:59.235 "is_configured": true, 00:10:59.235 "data_offset": 2048, 00:10:59.235 "data_size": 63488 00:10:59.235 }, 00:10:59.235 { 00:10:59.235 "name": "BaseBdev4", 00:10:59.235 "uuid": "65bce64f-f9f2-5dbe-8241-0b8dcd72683b", 00:10:59.235 "is_configured": true, 00:10:59.235 "data_offset": 2048, 00:10:59.235 "data_size": 63488 00:10:59.235 } 00:10:59.235 ] 00:10:59.235 }' 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.235 03:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.805 03:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.805 03:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.805 [2024-11-21 03:19:47.187238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.758 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.758 "name": "raid_bdev1", 00:11:00.758 "uuid": "1acc73bb-df76-4832-823b-9551ac862926", 00:11:00.758 "strip_size_kb": 64, 00:11:00.758 "state": "online", 
00:11:00.758 "raid_level": "raid0", 00:11:00.758 "superblock": true, 00:11:00.758 "num_base_bdevs": 4, 00:11:00.758 "num_base_bdevs_discovered": 4, 00:11:00.758 "num_base_bdevs_operational": 4, 00:11:00.758 "base_bdevs_list": [ 00:11:00.758 { 00:11:00.758 "name": "BaseBdev1", 00:11:00.758 "uuid": "19ff33f7-459d-52e0-934b-0793d0aa924c", 00:11:00.758 "is_configured": true, 00:11:00.758 "data_offset": 2048, 00:11:00.758 "data_size": 63488 00:11:00.758 }, 00:11:00.758 { 00:11:00.758 "name": "BaseBdev2", 00:11:00.758 "uuid": "946ea691-f003-5b0b-b293-02ca985a6c45", 00:11:00.758 "is_configured": true, 00:11:00.759 "data_offset": 2048, 00:11:00.759 "data_size": 63488 00:11:00.759 }, 00:11:00.759 { 00:11:00.759 "name": "BaseBdev3", 00:11:00.759 "uuid": "1d506683-fa96-578c-8185-4f7d95c15301", 00:11:00.759 "is_configured": true, 00:11:00.759 "data_offset": 2048, 00:11:00.759 "data_size": 63488 00:11:00.759 }, 00:11:00.759 { 00:11:00.759 "name": "BaseBdev4", 00:11:00.759 "uuid": "65bce64f-f9f2-5dbe-8241-0b8dcd72683b", 00:11:00.759 "is_configured": true, 00:11:00.759 "data_offset": 2048, 00:11:00.759 "data_size": 63488 00:11:00.759 } 00:11:00.759 ] 00:11:00.759 }' 00:11:00.759 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.759 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.019 [2024-11-21 03:19:48.541987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.019 [2024-11-21 03:19:48.542049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.019 [2024-11-21 03:19:48.544795] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.019 [2024-11-21 03:19:48.544858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.019 [2024-11-21 03:19:48.544903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.019 [2024-11-21 03:19:48.544916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:01.019 { 00:11:01.019 "results": [ 00:11:01.019 { 00:11:01.019 "job": "raid_bdev1", 00:11:01.019 "core_mask": "0x1", 00:11:01.019 "workload": "randrw", 00:11:01.019 "percentage": 50, 00:11:01.019 "status": "finished", 00:11:01.019 "queue_depth": 1, 00:11:01.019 "io_size": 131072, 00:11:01.019 "runtime": 1.352588, 00:11:01.019 "iops": 15601.942350516196, 00:11:01.019 "mibps": 1950.2427938145245, 00:11:01.019 "io_failed": 1, 00:11:01.019 "io_timeout": 0, 00:11:01.019 "avg_latency_us": 88.94960228012012, 00:11:01.019 "min_latency_us": 26.887474941166218, 00:11:01.019 "max_latency_us": 1470.889915453674 00:11:01.019 } 00:11:01.019 ], 00:11:01.019 "core_count": 1 00:11:01.019 } 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84085 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 84085 ']' 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 84085 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.019 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84085 00:11:01.279 03:19:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.279 killing process with pid 84085 00:11:01.279 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.279 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84085' 00:11:01.279 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 84085 00:11:01.279 [2024-11-21 03:19:48.590623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.279 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 84085 00:11:01.279 [2024-11-21 03:19:48.628000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IjvswJOl6t 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:01.539 ************************************ 00:11:01.539 END TEST raid_write_error_test 00:11:01.539 ************************************ 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:01.539 00:11:01.539 real 0m3.394s 00:11:01.539 user 0m4.287s 00:11:01.539 sys 0m0.593s 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.539 03:19:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.539 03:19:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:01.539 03:19:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:01.539 03:19:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.540 03:19:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.540 03:19:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.540 ************************************ 00:11:01.540 START TEST raid_state_function_test 00:11:01.540 ************************************ 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.540 03:19:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:01.540 Process raid 
pid: 84212 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84212 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84212' 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84212 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84212 ']' 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.540 03:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.540 [2024-11-21 03:19:49.020969] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:01.540 [2024-11-21 03:19:49.021105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.799 [2024-11-21 03:19:49.160045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:01.799 [2024-11-21 03:19:49.199342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.799 [2024-11-21 03:19:49.229543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.800 [2024-11-21 03:19:49.272628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.800 [2024-11-21 03:19:49.272667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.368 [2024-11-21 03:19:49.911681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.368 [2024-11-21 03:19:49.911855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.368 [2024-11-21 03:19:49.911879] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.368 [2024-11-21 03:19:49.911890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.368 [2024-11-21 03:19:49.911902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.368 [2024-11-21 03:19:49.911910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.368 [2024-11-21 03:19:49.911919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.368 
[2024-11-21 03:19:49.911927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.368 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.369 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.628 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.629 03:19:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.629 "name": "Existed_Raid", 00:11:02.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.629 "strip_size_kb": 64, 00:11:02.629 "state": "configuring", 00:11:02.629 "raid_level": "concat", 00:11:02.629 "superblock": false, 00:11:02.629 "num_base_bdevs": 4, 00:11:02.629 "num_base_bdevs_discovered": 0, 00:11:02.629 "num_base_bdevs_operational": 4, 00:11:02.629 "base_bdevs_list": [ 00:11:02.629 { 00:11:02.629 "name": "BaseBdev1", 00:11:02.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.629 "is_configured": false, 00:11:02.629 "data_offset": 0, 00:11:02.629 "data_size": 0 00:11:02.629 }, 00:11:02.629 { 00:11:02.629 "name": "BaseBdev2", 00:11:02.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.629 "is_configured": false, 00:11:02.629 "data_offset": 0, 00:11:02.629 "data_size": 0 00:11:02.629 }, 00:11:02.629 { 00:11:02.629 "name": "BaseBdev3", 00:11:02.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.629 "is_configured": false, 00:11:02.629 "data_offset": 0, 00:11:02.629 "data_size": 0 00:11:02.629 }, 00:11:02.629 { 00:11:02.629 "name": "BaseBdev4", 00:11:02.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.629 "is_configured": false, 00:11:02.629 "data_offset": 0, 00:11:02.629 "data_size": 0 00:11:02.629 } 00:11:02.629 ] 00:11:02.629 }' 00:11:02.629 03:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.629 03:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.889 [2024-11-21 03:19:50.399690] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.889 [2024-11-21 03:19:50.399829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.889 [2024-11-21 03:19:50.411716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.889 [2024-11-21 03:19:50.411842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.889 [2024-11-21 03:19:50.411877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.889 [2024-11-21 03:19:50.411902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.889 [2024-11-21 03:19:50.411925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.889 [2024-11-21 03:19:50.411949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.889 [2024-11-21 03:19:50.411971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.889 [2024-11-21 03:19:50.412003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.889 [2024-11-21 03:19:50.432885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.889 BaseBdev1 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.889 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.149 [ 00:11:03.149 { 
00:11:03.149 "name": "BaseBdev1", 00:11:03.149 "aliases": [ 00:11:03.149 "a0316ef4-0316-4e73-bf9a-ad7a5dc86476" 00:11:03.149 ], 00:11:03.149 "product_name": "Malloc disk", 00:11:03.149 "block_size": 512, 00:11:03.149 "num_blocks": 65536, 00:11:03.149 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:03.149 "assigned_rate_limits": { 00:11:03.149 "rw_ios_per_sec": 0, 00:11:03.149 "rw_mbytes_per_sec": 0, 00:11:03.149 "r_mbytes_per_sec": 0, 00:11:03.149 "w_mbytes_per_sec": 0 00:11:03.149 }, 00:11:03.149 "claimed": true, 00:11:03.149 "claim_type": "exclusive_write", 00:11:03.149 "zoned": false, 00:11:03.149 "supported_io_types": { 00:11:03.150 "read": true, 00:11:03.150 "write": true, 00:11:03.150 "unmap": true, 00:11:03.150 "flush": true, 00:11:03.150 "reset": true, 00:11:03.150 "nvme_admin": false, 00:11:03.150 "nvme_io": false, 00:11:03.150 "nvme_io_md": false, 00:11:03.150 "write_zeroes": true, 00:11:03.150 "zcopy": true, 00:11:03.150 "get_zone_info": false, 00:11:03.150 "zone_management": false, 00:11:03.150 "zone_append": false, 00:11:03.150 "compare": false, 00:11:03.150 "compare_and_write": false, 00:11:03.150 "abort": true, 00:11:03.150 "seek_hole": false, 00:11:03.150 "seek_data": false, 00:11:03.150 "copy": true, 00:11:03.150 "nvme_iov_md": false 00:11:03.150 }, 00:11:03.150 "memory_domains": [ 00:11:03.150 { 00:11:03.150 "dma_device_id": "system", 00:11:03.150 "dma_device_type": 1 00:11:03.150 }, 00:11:03.150 { 00:11:03.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.150 "dma_device_type": 2 00:11:03.150 } 00:11:03.150 ], 00:11:03.150 "driver_specific": {} 00:11:03.150 } 00:11:03.150 ] 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.150 "name": "Existed_Raid", 00:11:03.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.150 "strip_size_kb": 64, 00:11:03.150 "state": "configuring", 00:11:03.150 "raid_level": "concat", 00:11:03.150 "superblock": false, 00:11:03.150 "num_base_bdevs": 4, 00:11:03.150 
"num_base_bdevs_discovered": 1, 00:11:03.150 "num_base_bdevs_operational": 4, 00:11:03.150 "base_bdevs_list": [ 00:11:03.150 { 00:11:03.150 "name": "BaseBdev1", 00:11:03.150 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:03.150 "is_configured": true, 00:11:03.150 "data_offset": 0, 00:11:03.150 "data_size": 65536 00:11:03.150 }, 00:11:03.150 { 00:11:03.150 "name": "BaseBdev2", 00:11:03.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.150 "is_configured": false, 00:11:03.150 "data_offset": 0, 00:11:03.150 "data_size": 0 00:11:03.150 }, 00:11:03.150 { 00:11:03.150 "name": "BaseBdev3", 00:11:03.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.150 "is_configured": false, 00:11:03.150 "data_offset": 0, 00:11:03.150 "data_size": 0 00:11:03.150 }, 00:11:03.150 { 00:11:03.150 "name": "BaseBdev4", 00:11:03.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.150 "is_configured": false, 00:11:03.150 "data_offset": 0, 00:11:03.150 "data_size": 0 00:11:03.150 } 00:11:03.150 ] 00:11:03.150 }' 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.150 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.410 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 [2024-11-21 03:19:50.925101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.410 [2024-11-21 03:19:50.925291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:03.410 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.410 03:19:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.410 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.410 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.410 [2024-11-21 03:19:50.937156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.410 [2024-11-21 03:19:50.939294] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.410 [2024-11-21 03:19:50.939385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.410 [2024-11-21 03:19:50.939401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.410 [2024-11-21 03:19:50.939409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.410 [2024-11-21 03:19:50.939417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.410 [2024-11-21 03:19:50.939425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.411 03:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.670 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.670 "name": "Existed_Raid", 00:11:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.670 "strip_size_kb": 64, 00:11:03.670 "state": "configuring", 00:11:03.670 "raid_level": "concat", 00:11:03.670 "superblock": false, 00:11:03.670 "num_base_bdevs": 4, 00:11:03.670 "num_base_bdevs_discovered": 1, 00:11:03.670 "num_base_bdevs_operational": 4, 00:11:03.670 "base_bdevs_list": [ 00:11:03.670 { 00:11:03.670 "name": "BaseBdev1", 00:11:03.670 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:03.670 
"is_configured": true, 00:11:03.670 "data_offset": 0, 00:11:03.670 "data_size": 65536 00:11:03.670 }, 00:11:03.670 { 00:11:03.670 "name": "BaseBdev2", 00:11:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.670 "is_configured": false, 00:11:03.670 "data_offset": 0, 00:11:03.670 "data_size": 0 00:11:03.670 }, 00:11:03.670 { 00:11:03.670 "name": "BaseBdev3", 00:11:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.670 "is_configured": false, 00:11:03.670 "data_offset": 0, 00:11:03.670 "data_size": 0 00:11:03.670 }, 00:11:03.670 { 00:11:03.670 "name": "BaseBdev4", 00:11:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.670 "is_configured": false, 00:11:03.670 "data_offset": 0, 00:11:03.670 "data_size": 0 00:11:03.670 } 00:11:03.670 ] 00:11:03.670 }' 00:11:03.670 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.670 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.931 [2024-11-21 03:19:51.432373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.931 BaseBdev2 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.931 03:19:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.931 [ 00:11:03.931 { 00:11:03.931 "name": "BaseBdev2", 00:11:03.931 "aliases": [ 00:11:03.931 "9b95bf34-0eba-4408-b083-446790b4365d" 00:11:03.931 ], 00:11:03.931 "product_name": "Malloc disk", 00:11:03.931 "block_size": 512, 00:11:03.931 "num_blocks": 65536, 00:11:03.931 "uuid": "9b95bf34-0eba-4408-b083-446790b4365d", 00:11:03.931 "assigned_rate_limits": { 00:11:03.931 "rw_ios_per_sec": 0, 00:11:03.931 "rw_mbytes_per_sec": 0, 00:11:03.931 "r_mbytes_per_sec": 0, 00:11:03.931 "w_mbytes_per_sec": 0 00:11:03.931 }, 00:11:03.931 "claimed": true, 00:11:03.931 "claim_type": "exclusive_write", 00:11:03.931 "zoned": false, 00:11:03.931 "supported_io_types": { 00:11:03.931 "read": true, 00:11:03.931 "write": true, 00:11:03.931 "unmap": true, 00:11:03.931 "flush": true, 00:11:03.931 "reset": true, 00:11:03.931 "nvme_admin": false, 00:11:03.931 "nvme_io": false, 00:11:03.931 "nvme_io_md": 
false, 00:11:03.931 "write_zeroes": true, 00:11:03.931 "zcopy": true, 00:11:03.931 "get_zone_info": false, 00:11:03.931 "zone_management": false, 00:11:03.931 "zone_append": false, 00:11:03.931 "compare": false, 00:11:03.931 "compare_and_write": false, 00:11:03.931 "abort": true, 00:11:03.931 "seek_hole": false, 00:11:03.931 "seek_data": false, 00:11:03.931 "copy": true, 00:11:03.931 "nvme_iov_md": false 00:11:03.931 }, 00:11:03.931 "memory_domains": [ 00:11:03.931 { 00:11:03.931 "dma_device_id": "system", 00:11:03.931 "dma_device_type": 1 00:11:03.931 }, 00:11:03.931 { 00:11:03.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.931 "dma_device_type": 2 00:11:03.931 } 00:11:03.931 ], 00:11:03.931 "driver_specific": {} 00:11:03.931 } 00:11:03.931 ] 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.931 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.191 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.191 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.191 "name": "Existed_Raid", 00:11:04.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.191 "strip_size_kb": 64, 00:11:04.191 "state": "configuring", 00:11:04.191 "raid_level": "concat", 00:11:04.191 "superblock": false, 00:11:04.191 "num_base_bdevs": 4, 00:11:04.191 "num_base_bdevs_discovered": 2, 00:11:04.191 "num_base_bdevs_operational": 4, 00:11:04.191 "base_bdevs_list": [ 00:11:04.191 { 00:11:04.191 "name": "BaseBdev1", 00:11:04.191 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:04.191 "is_configured": true, 00:11:04.191 "data_offset": 0, 00:11:04.191 "data_size": 65536 00:11:04.191 }, 00:11:04.191 { 00:11:04.191 "name": "BaseBdev2", 00:11:04.191 "uuid": "9b95bf34-0eba-4408-b083-446790b4365d", 00:11:04.191 "is_configured": true, 00:11:04.191 "data_offset": 0, 00:11:04.191 "data_size": 65536 00:11:04.191 }, 00:11:04.191 { 00:11:04.191 "name": "BaseBdev3", 00:11:04.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.191 
"is_configured": false, 00:11:04.191 "data_offset": 0, 00:11:04.191 "data_size": 0 00:11:04.191 }, 00:11:04.191 { 00:11:04.191 "name": "BaseBdev4", 00:11:04.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.191 "is_configured": false, 00:11:04.191 "data_offset": 0, 00:11:04.191 "data_size": 0 00:11:04.191 } 00:11:04.191 ] 00:11:04.191 }' 00:11:04.191 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.191 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.451 [2024-11-21 03:19:51.932473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.451 BaseBdev3 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.451 03:19:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.451 [ 00:11:04.451 { 00:11:04.451 "name": "BaseBdev3", 00:11:04.451 "aliases": [ 00:11:04.451 "bc6c7107-7a22-4978-9a23-c57a6a92122d" 00:11:04.451 ], 00:11:04.451 "product_name": "Malloc disk", 00:11:04.451 "block_size": 512, 00:11:04.451 "num_blocks": 65536, 00:11:04.451 "uuid": "bc6c7107-7a22-4978-9a23-c57a6a92122d", 00:11:04.451 "assigned_rate_limits": { 00:11:04.451 "rw_ios_per_sec": 0, 00:11:04.451 "rw_mbytes_per_sec": 0, 00:11:04.451 "r_mbytes_per_sec": 0, 00:11:04.451 "w_mbytes_per_sec": 0 00:11:04.451 }, 00:11:04.451 "claimed": true, 00:11:04.451 "claim_type": "exclusive_write", 00:11:04.451 "zoned": false, 00:11:04.451 "supported_io_types": { 00:11:04.451 "read": true, 00:11:04.451 "write": true, 00:11:04.451 "unmap": true, 00:11:04.451 "flush": true, 00:11:04.451 "reset": true, 00:11:04.451 "nvme_admin": false, 00:11:04.451 "nvme_io": false, 00:11:04.451 "nvme_io_md": false, 00:11:04.451 "write_zeroes": true, 00:11:04.451 "zcopy": true, 00:11:04.451 "get_zone_info": false, 00:11:04.451 "zone_management": false, 00:11:04.451 "zone_append": false, 00:11:04.451 "compare": false, 00:11:04.451 "compare_and_write": false, 00:11:04.451 "abort": true, 00:11:04.451 "seek_hole": false, 00:11:04.451 "seek_data": false, 00:11:04.451 "copy": true, 00:11:04.451 "nvme_iov_md": false 00:11:04.451 }, 00:11:04.451 
"memory_domains": [ 00:11:04.451 { 00:11:04.451 "dma_device_id": "system", 00:11:04.451 "dma_device_type": 1 00:11:04.451 }, 00:11:04.451 { 00:11:04.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.451 "dma_device_type": 2 00:11:04.451 } 00:11:04.451 ], 00:11:04.451 "driver_specific": {} 00:11:04.451 } 00:11:04.451 ] 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.451 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.452 03:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.711 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.711 "name": "Existed_Raid", 00:11:04.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.711 "strip_size_kb": 64, 00:11:04.711 "state": "configuring", 00:11:04.711 "raid_level": "concat", 00:11:04.711 "superblock": false, 00:11:04.711 "num_base_bdevs": 4, 00:11:04.711 "num_base_bdevs_discovered": 3, 00:11:04.711 "num_base_bdevs_operational": 4, 00:11:04.711 "base_bdevs_list": [ 00:11:04.711 { 00:11:04.711 "name": "BaseBdev1", 00:11:04.711 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:04.711 "is_configured": true, 00:11:04.711 "data_offset": 0, 00:11:04.711 "data_size": 65536 00:11:04.711 }, 00:11:04.711 { 00:11:04.711 "name": "BaseBdev2", 00:11:04.711 "uuid": "9b95bf34-0eba-4408-b083-446790b4365d", 00:11:04.711 "is_configured": true, 00:11:04.711 "data_offset": 0, 00:11:04.712 "data_size": 65536 00:11:04.712 }, 00:11:04.712 { 00:11:04.712 "name": "BaseBdev3", 00:11:04.712 "uuid": "bc6c7107-7a22-4978-9a23-c57a6a92122d", 00:11:04.712 "is_configured": true, 00:11:04.712 "data_offset": 0, 00:11:04.712 "data_size": 65536 00:11:04.712 }, 00:11:04.712 { 00:11:04.712 "name": "BaseBdev4", 00:11:04.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.712 "is_configured": false, 00:11:04.712 "data_offset": 0, 00:11:04.712 "data_size": 0 00:11:04.712 } 00:11:04.712 ] 00:11:04.712 }' 00:11:04.712 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:11:04.712 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 [2024-11-21 03:19:52.423832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.972 [2024-11-21 03:19:52.423978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:04.972 [2024-11-21 03:19:52.423994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:04.972 [2024-11-21 03:19:52.424281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:04.972 [2024-11-21 03:19:52.424433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:04.972 [2024-11-21 03:19:52.424444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:04.972 [2024-11-21 03:19:52.424672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.972 BaseBdev4 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.972 03:19:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 [ 00:11:04.972 { 00:11:04.972 "name": "BaseBdev4", 00:11:04.972 "aliases": [ 00:11:04.972 "3c0a7f22-eb10-4690-bc2d-e843d641b8d0" 00:11:04.972 ], 00:11:04.972 "product_name": "Malloc disk", 00:11:04.972 "block_size": 512, 00:11:04.972 "num_blocks": 65536, 00:11:04.972 "uuid": "3c0a7f22-eb10-4690-bc2d-e843d641b8d0", 00:11:04.972 "assigned_rate_limits": { 00:11:04.972 "rw_ios_per_sec": 0, 00:11:04.972 "rw_mbytes_per_sec": 0, 00:11:04.972 "r_mbytes_per_sec": 0, 00:11:04.972 "w_mbytes_per_sec": 0 00:11:04.972 }, 00:11:04.972 "claimed": true, 00:11:04.972 "claim_type": "exclusive_write", 00:11:04.972 "zoned": false, 00:11:04.972 "supported_io_types": { 00:11:04.972 "read": true, 00:11:04.972 "write": true, 00:11:04.972 "unmap": true, 00:11:04.972 "flush": true, 00:11:04.972 "reset": true, 00:11:04.972 "nvme_admin": false, 00:11:04.972 "nvme_io": false, 00:11:04.972 "nvme_io_md": false, 00:11:04.972 "write_zeroes": true, 00:11:04.972 "zcopy": true, 00:11:04.972 "get_zone_info": false, 
00:11:04.972 "zone_management": false, 00:11:04.972 "zone_append": false, 00:11:04.972 "compare": false, 00:11:04.972 "compare_and_write": false, 00:11:04.972 "abort": true, 00:11:04.972 "seek_hole": false, 00:11:04.972 "seek_data": false, 00:11:04.972 "copy": true, 00:11:04.972 "nvme_iov_md": false 00:11:04.972 }, 00:11:04.972 "memory_domains": [ 00:11:04.972 { 00:11:04.972 "dma_device_id": "system", 00:11:04.972 "dma_device_type": 1 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.972 "dma_device_type": 2 00:11:04.972 } 00:11:04.972 ], 00:11:04.972 "driver_specific": {} 00:11:04.972 } 00:11:04.972 ] 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.972 "name": "Existed_Raid", 00:11:04.972 "uuid": "35059504-8c9b-486a-9db4-14a4443c6f50", 00:11:04.972 "strip_size_kb": 64, 00:11:04.972 "state": "online", 00:11:04.972 "raid_level": "concat", 00:11:04.972 "superblock": false, 00:11:04.972 "num_base_bdevs": 4, 00:11:04.972 "num_base_bdevs_discovered": 4, 00:11:04.972 "num_base_bdevs_operational": 4, 00:11:04.972 "base_bdevs_list": [ 00:11:04.972 { 00:11:04.972 "name": "BaseBdev1", 00:11:04.972 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 0, 00:11:04.972 "data_size": 65536 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "name": "BaseBdev2", 00:11:04.972 "uuid": "9b95bf34-0eba-4408-b083-446790b4365d", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 0, 00:11:04.972 "data_size": 65536 00:11:04.972 }, 00:11:04.972 { 00:11:04.972 "name": "BaseBdev3", 00:11:04.972 "uuid": "bc6c7107-7a22-4978-9a23-c57a6a92122d", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 0, 00:11:04.972 "data_size": 65536 00:11:04.972 }, 00:11:04.972 { 
00:11:04.972 "name": "BaseBdev4", 00:11:04.972 "uuid": "3c0a7f22-eb10-4690-bc2d-e843d641b8d0", 00:11:04.972 "is_configured": true, 00:11:04.972 "data_offset": 0, 00:11:04.972 "data_size": 65536 00:11:04.972 } 00:11:04.972 ] 00:11:04.972 }' 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.972 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.557 [2024-11-21 03:19:52.904455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.557 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.557 "name": "Existed_Raid", 00:11:05.557 "aliases": [ 00:11:05.557 
"35059504-8c9b-486a-9db4-14a4443c6f50" 00:11:05.557 ], 00:11:05.557 "product_name": "Raid Volume", 00:11:05.557 "block_size": 512, 00:11:05.557 "num_blocks": 262144, 00:11:05.557 "uuid": "35059504-8c9b-486a-9db4-14a4443c6f50", 00:11:05.557 "assigned_rate_limits": { 00:11:05.557 "rw_ios_per_sec": 0, 00:11:05.557 "rw_mbytes_per_sec": 0, 00:11:05.557 "r_mbytes_per_sec": 0, 00:11:05.557 "w_mbytes_per_sec": 0 00:11:05.557 }, 00:11:05.557 "claimed": false, 00:11:05.557 "zoned": false, 00:11:05.557 "supported_io_types": { 00:11:05.557 "read": true, 00:11:05.557 "write": true, 00:11:05.557 "unmap": true, 00:11:05.557 "flush": true, 00:11:05.557 "reset": true, 00:11:05.557 "nvme_admin": false, 00:11:05.557 "nvme_io": false, 00:11:05.558 "nvme_io_md": false, 00:11:05.558 "write_zeroes": true, 00:11:05.558 "zcopy": false, 00:11:05.558 "get_zone_info": false, 00:11:05.558 "zone_management": false, 00:11:05.558 "zone_append": false, 00:11:05.558 "compare": false, 00:11:05.558 "compare_and_write": false, 00:11:05.558 "abort": false, 00:11:05.558 "seek_hole": false, 00:11:05.558 "seek_data": false, 00:11:05.558 "copy": false, 00:11:05.558 "nvme_iov_md": false 00:11:05.558 }, 00:11:05.558 "memory_domains": [ 00:11:05.558 { 00:11:05.558 "dma_device_id": "system", 00:11:05.558 "dma_device_type": 1 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.558 "dma_device_type": 2 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "dma_device_id": "system", 00:11:05.558 "dma_device_type": 1 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.558 "dma_device_type": 2 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "dma_device_id": "system", 00:11:05.558 "dma_device_type": 1 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.558 "dma_device_type": 2 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "dma_device_id": "system", 00:11:05.558 "dma_device_type": 1 00:11:05.558 }, 
00:11:05.558 { 00:11:05.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.558 "dma_device_type": 2 00:11:05.558 } 00:11:05.558 ], 00:11:05.558 "driver_specific": { 00:11:05.558 "raid": { 00:11:05.558 "uuid": "35059504-8c9b-486a-9db4-14a4443c6f50", 00:11:05.558 "strip_size_kb": 64, 00:11:05.558 "state": "online", 00:11:05.558 "raid_level": "concat", 00:11:05.558 "superblock": false, 00:11:05.558 "num_base_bdevs": 4, 00:11:05.558 "num_base_bdevs_discovered": 4, 00:11:05.558 "num_base_bdevs_operational": 4, 00:11:05.558 "base_bdevs_list": [ 00:11:05.558 { 00:11:05.558 "name": "BaseBdev1", 00:11:05.558 "uuid": "a0316ef4-0316-4e73-bf9a-ad7a5dc86476", 00:11:05.558 "is_configured": true, 00:11:05.558 "data_offset": 0, 00:11:05.558 "data_size": 65536 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "name": "BaseBdev2", 00:11:05.558 "uuid": "9b95bf34-0eba-4408-b083-446790b4365d", 00:11:05.558 "is_configured": true, 00:11:05.558 "data_offset": 0, 00:11:05.558 "data_size": 65536 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "name": "BaseBdev3", 00:11:05.558 "uuid": "bc6c7107-7a22-4978-9a23-c57a6a92122d", 00:11:05.558 "is_configured": true, 00:11:05.558 "data_offset": 0, 00:11:05.558 "data_size": 65536 00:11:05.558 }, 00:11:05.558 { 00:11:05.558 "name": "BaseBdev4", 00:11:05.558 "uuid": "3c0a7f22-eb10-4690-bc2d-e843d641b8d0", 00:11:05.558 "is_configured": true, 00:11:05.558 "data_offset": 0, 00:11:05.558 "data_size": 65536 00:11:05.558 } 00:11:05.558 ] 00:11:05.558 } 00:11:05.558 } 00:11:05.558 }' 00:11:05.558 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.558 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.558 BaseBdev2 00:11:05.558 BaseBdev3 00:11:05.558 BaseBdev4' 00:11:05.558 03:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.558 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.822 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.822 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.822 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.822 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.822 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.823 [2024-11-21 03:19:53.212223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.823 [2024-11-21 03:19:53.212261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.823 [2024-11-21 03:19:53.212324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.823 "name": "Existed_Raid", 00:11:05.823 "uuid": "35059504-8c9b-486a-9db4-14a4443c6f50", 00:11:05.823 "strip_size_kb": 64, 00:11:05.823 "state": "offline", 00:11:05.823 "raid_level": "concat", 00:11:05.823 "superblock": false, 00:11:05.823 "num_base_bdevs": 4, 00:11:05.823 "num_base_bdevs_discovered": 3, 00:11:05.823 "num_base_bdevs_operational": 3, 00:11:05.823 "base_bdevs_list": [ 00:11:05.823 { 00:11:05.823 "name": null, 00:11:05.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.823 "is_configured": false, 00:11:05.823 "data_offset": 0, 00:11:05.823 "data_size": 65536 00:11:05.823 }, 00:11:05.823 { 00:11:05.823 "name": "BaseBdev2", 00:11:05.823 "uuid": "9b95bf34-0eba-4408-b083-446790b4365d", 00:11:05.823 "is_configured": true, 00:11:05.823 "data_offset": 0, 00:11:05.823 "data_size": 65536 00:11:05.823 }, 00:11:05.823 { 00:11:05.823 "name": "BaseBdev3", 00:11:05.823 "uuid": "bc6c7107-7a22-4978-9a23-c57a6a92122d", 
00:11:05.823 "is_configured": true, 00:11:05.823 "data_offset": 0, 00:11:05.823 "data_size": 65536 00:11:05.823 }, 00:11:05.823 { 00:11:05.823 "name": "BaseBdev4", 00:11:05.823 "uuid": "3c0a7f22-eb10-4690-bc2d-e843d641b8d0", 00:11:05.823 "is_configured": true, 00:11:05.823 "data_offset": 0, 00:11:05.823 "data_size": 65536 00:11:05.823 } 00:11:05.823 ] 00:11:05.823 }' 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.823 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 [2024-11-21 03:19:53.751963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 [2024-11-21 03:19:53.819519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.393 
03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 [2024-11-21 03:19:53.879165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:06.393 [2024-11-21 03:19:53.879245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.393 BaseBdev2 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.393 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 [ 00:11:06.654 { 00:11:06.654 "name": "BaseBdev2", 00:11:06.654 "aliases": [ 00:11:06.654 "51c2b1db-4c7e-41d0-9587-d074de8d6dcd" 00:11:06.654 ], 00:11:06.654 "product_name": "Malloc disk", 00:11:06.654 "block_size": 512, 00:11:06.654 "num_blocks": 65536, 00:11:06.654 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:06.654 "assigned_rate_limits": { 00:11:06.654 "rw_ios_per_sec": 0, 00:11:06.654 "rw_mbytes_per_sec": 0, 00:11:06.654 "r_mbytes_per_sec": 0, 00:11:06.654 "w_mbytes_per_sec": 0 00:11:06.654 }, 00:11:06.654 "claimed": false, 00:11:06.654 "zoned": false, 00:11:06.654 "supported_io_types": { 00:11:06.654 "read": true, 00:11:06.654 "write": true, 00:11:06.654 "unmap": true, 00:11:06.654 "flush": true, 00:11:06.654 "reset": true, 00:11:06.654 "nvme_admin": false, 00:11:06.654 "nvme_io": false, 00:11:06.654 "nvme_io_md": false, 00:11:06.654 "write_zeroes": true, 00:11:06.654 "zcopy": true, 00:11:06.654 "get_zone_info": false, 00:11:06.654 "zone_management": false, 00:11:06.654 "zone_append": false, 00:11:06.654 "compare": false, 00:11:06.654 "compare_and_write": false, 00:11:06.654 "abort": true, 00:11:06.654 "seek_hole": false, 00:11:06.654 "seek_data": false, 00:11:06.654 "copy": true, 00:11:06.654 "nvme_iov_md": false 00:11:06.654 }, 00:11:06.654 "memory_domains": [ 00:11:06.654 { 00:11:06.654 "dma_device_id": "system", 00:11:06.654 
"dma_device_type": 1 00:11:06.654 }, 00:11:06.654 { 00:11:06.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.654 "dma_device_type": 2 00:11:06.654 } 00:11:06.654 ], 00:11:06.654 "driver_specific": {} 00:11:06.654 } 00:11:06.654 ] 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.654 03:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 BaseBdev3 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.654 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.654 [ 00:11:06.654 { 00:11:06.654 "name": "BaseBdev3", 00:11:06.654 "aliases": [ 00:11:06.654 "350fa3cc-17c1-41d4-9fa9-d5329f8dd187" 00:11:06.654 ], 00:11:06.654 "product_name": "Malloc disk", 00:11:06.654 "block_size": 512, 00:11:06.654 "num_blocks": 65536, 00:11:06.655 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:06.655 "assigned_rate_limits": { 00:11:06.655 "rw_ios_per_sec": 0, 00:11:06.655 "rw_mbytes_per_sec": 0, 00:11:06.655 "r_mbytes_per_sec": 0, 00:11:06.655 "w_mbytes_per_sec": 0 00:11:06.655 }, 00:11:06.655 "claimed": false, 00:11:06.655 "zoned": false, 00:11:06.655 "supported_io_types": { 00:11:06.655 "read": true, 00:11:06.655 "write": true, 00:11:06.655 "unmap": true, 00:11:06.655 "flush": true, 00:11:06.655 "reset": true, 00:11:06.655 "nvme_admin": false, 00:11:06.655 "nvme_io": false, 00:11:06.655 "nvme_io_md": false, 00:11:06.655 "write_zeroes": true, 00:11:06.655 "zcopy": true, 00:11:06.655 "get_zone_info": false, 00:11:06.655 "zone_management": false, 00:11:06.655 "zone_append": false, 00:11:06.655 "compare": false, 00:11:06.655 "compare_and_write": false, 00:11:06.655 "abort": true, 00:11:06.655 "seek_hole": false, 00:11:06.655 "seek_data": false, 00:11:06.655 "copy": true, 00:11:06.655 "nvme_iov_md": false 00:11:06.655 }, 00:11:06.655 "memory_domains": [ 00:11:06.655 { 00:11:06.655 "dma_device_id": "system", 00:11:06.655 
"dma_device_type": 1 00:11:06.655 }, 00:11:06.655 { 00:11:06.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.655 "dma_device_type": 2 00:11:06.655 } 00:11:06.655 ], 00:11:06.655 "driver_specific": {} 00:11:06.655 } 00:11:06.655 ] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.655 BaseBdev4 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.655 [ 00:11:06.655 { 00:11:06.655 "name": "BaseBdev4", 00:11:06.655 "aliases": [ 00:11:06.655 "69231f12-a72b-4eee-947a-d0a3bf24dbd8" 00:11:06.655 ], 00:11:06.655 "product_name": "Malloc disk", 00:11:06.655 "block_size": 512, 00:11:06.655 "num_blocks": 65536, 00:11:06.655 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:06.655 "assigned_rate_limits": { 00:11:06.655 "rw_ios_per_sec": 0, 00:11:06.655 "rw_mbytes_per_sec": 0, 00:11:06.655 "r_mbytes_per_sec": 0, 00:11:06.655 "w_mbytes_per_sec": 0 00:11:06.655 }, 00:11:06.655 "claimed": false, 00:11:06.655 "zoned": false, 00:11:06.655 "supported_io_types": { 00:11:06.655 "read": true, 00:11:06.655 "write": true, 00:11:06.655 "unmap": true, 00:11:06.655 "flush": true, 00:11:06.655 "reset": true, 00:11:06.655 "nvme_admin": false, 00:11:06.655 "nvme_io": false, 00:11:06.655 "nvme_io_md": false, 00:11:06.655 "write_zeroes": true, 00:11:06.655 "zcopy": true, 00:11:06.655 "get_zone_info": false, 00:11:06.655 "zone_management": false, 00:11:06.655 "zone_append": false, 00:11:06.655 "compare": false, 00:11:06.655 "compare_and_write": false, 00:11:06.655 "abort": true, 00:11:06.655 "seek_hole": false, 00:11:06.655 "seek_data": false, 00:11:06.655 "copy": true, 00:11:06.655 "nvme_iov_md": false 00:11:06.655 }, 00:11:06.655 "memory_domains": [ 00:11:06.655 { 00:11:06.655 "dma_device_id": "system", 00:11:06.655 
"dma_device_type": 1 00:11:06.655 }, 00:11:06.655 { 00:11:06.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.655 "dma_device_type": 2 00:11:06.655 } 00:11:06.655 ], 00:11:06.655 "driver_specific": {} 00:11:06.655 } 00:11:06.655 ] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.655 [2024-11-21 03:19:54.102185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.655 [2024-11-21 03:19:54.102254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.655 [2024-11-21 03:19:54.102277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.655 [2024-11-21 03:19:54.104327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.655 [2024-11-21 03:19:54.104477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.655 03:19:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.655 "name": "Existed_Raid", 00:11:06.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.655 "strip_size_kb": 64, 00:11:06.655 "state": "configuring", 00:11:06.655 "raid_level": "concat", 00:11:06.655 "superblock": false, 00:11:06.655 "num_base_bdevs": 4, 00:11:06.655 "num_base_bdevs_discovered": 3, 00:11:06.655 
"num_base_bdevs_operational": 4, 00:11:06.655 "base_bdevs_list": [ 00:11:06.655 { 00:11:06.655 "name": "BaseBdev1", 00:11:06.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.655 "is_configured": false, 00:11:06.655 "data_offset": 0, 00:11:06.655 "data_size": 0 00:11:06.655 }, 00:11:06.655 { 00:11:06.655 "name": "BaseBdev2", 00:11:06.655 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:06.655 "is_configured": true, 00:11:06.655 "data_offset": 0, 00:11:06.655 "data_size": 65536 00:11:06.655 }, 00:11:06.655 { 00:11:06.655 "name": "BaseBdev3", 00:11:06.655 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:06.655 "is_configured": true, 00:11:06.655 "data_offset": 0, 00:11:06.655 "data_size": 65536 00:11:06.655 }, 00:11:06.655 { 00:11:06.655 "name": "BaseBdev4", 00:11:06.655 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:06.655 "is_configured": true, 00:11:06.655 "data_offset": 0, 00:11:06.655 "data_size": 65536 00:11:06.655 } 00:11:06.655 ] 00:11:06.655 }' 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.655 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.225 [2024-11-21 03:19:54.574311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.225 "name": "Existed_Raid", 00:11:07.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.225 "strip_size_kb": 64, 00:11:07.225 "state": "configuring", 00:11:07.225 "raid_level": "concat", 00:11:07.225 "superblock": false, 00:11:07.225 "num_base_bdevs": 4, 00:11:07.225 "num_base_bdevs_discovered": 2, 00:11:07.225 "num_base_bdevs_operational": 4, 00:11:07.225 "base_bdevs_list": [ 
00:11:07.225 { 00:11:07.225 "name": "BaseBdev1", 00:11:07.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.225 "is_configured": false, 00:11:07.225 "data_offset": 0, 00:11:07.225 "data_size": 0 00:11:07.225 }, 00:11:07.225 { 00:11:07.225 "name": null, 00:11:07.225 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:07.225 "is_configured": false, 00:11:07.225 "data_offset": 0, 00:11:07.225 "data_size": 65536 00:11:07.225 }, 00:11:07.225 { 00:11:07.225 "name": "BaseBdev3", 00:11:07.225 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:07.225 "is_configured": true, 00:11:07.225 "data_offset": 0, 00:11:07.225 "data_size": 65536 00:11:07.225 }, 00:11:07.225 { 00:11:07.225 "name": "BaseBdev4", 00:11:07.225 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:07.225 "is_configured": true, 00:11:07.225 "data_offset": 0, 00:11:07.225 "data_size": 65536 00:11:07.225 } 00:11:07.225 ] 00:11:07.225 }' 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.225 03:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.794 BaseBdev1 00:11:07.794 [2024-11-21 03:19:55.117606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.794 [ 00:11:07.794 { 00:11:07.794 "name": "BaseBdev1", 00:11:07.794 "aliases": [ 00:11:07.794 
"1095d38c-14a9-4baa-aa4d-43b9b95b2e95" 00:11:07.794 ], 00:11:07.794 "product_name": "Malloc disk", 00:11:07.794 "block_size": 512, 00:11:07.794 "num_blocks": 65536, 00:11:07.794 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:07.794 "assigned_rate_limits": { 00:11:07.794 "rw_ios_per_sec": 0, 00:11:07.794 "rw_mbytes_per_sec": 0, 00:11:07.794 "r_mbytes_per_sec": 0, 00:11:07.794 "w_mbytes_per_sec": 0 00:11:07.794 }, 00:11:07.794 "claimed": true, 00:11:07.794 "claim_type": "exclusive_write", 00:11:07.794 "zoned": false, 00:11:07.794 "supported_io_types": { 00:11:07.794 "read": true, 00:11:07.794 "write": true, 00:11:07.794 "unmap": true, 00:11:07.794 "flush": true, 00:11:07.794 "reset": true, 00:11:07.794 "nvme_admin": false, 00:11:07.794 "nvme_io": false, 00:11:07.794 "nvme_io_md": false, 00:11:07.794 "write_zeroes": true, 00:11:07.794 "zcopy": true, 00:11:07.794 "get_zone_info": false, 00:11:07.794 "zone_management": false, 00:11:07.794 "zone_append": false, 00:11:07.794 "compare": false, 00:11:07.794 "compare_and_write": false, 00:11:07.794 "abort": true, 00:11:07.794 "seek_hole": false, 00:11:07.794 "seek_data": false, 00:11:07.794 "copy": true, 00:11:07.794 "nvme_iov_md": false 00:11:07.794 }, 00:11:07.794 "memory_domains": [ 00:11:07.794 { 00:11:07.794 "dma_device_id": "system", 00:11:07.794 "dma_device_type": 1 00:11:07.794 }, 00:11:07.794 { 00:11:07.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.794 "dma_device_type": 2 00:11:07.794 } 00:11:07.794 ], 00:11:07.794 "driver_specific": {} 00:11:07.794 } 00:11:07.794 ] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.794 "name": "Existed_Raid", 00:11:07.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.794 "strip_size_kb": 64, 00:11:07.794 "state": "configuring", 00:11:07.794 "raid_level": "concat", 00:11:07.794 "superblock": false, 00:11:07.794 "num_base_bdevs": 4, 00:11:07.794 "num_base_bdevs_discovered": 3, 00:11:07.794 "num_base_bdevs_operational": 4, 00:11:07.794 
"base_bdevs_list": [ 00:11:07.794 { 00:11:07.794 "name": "BaseBdev1", 00:11:07.794 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:07.794 "is_configured": true, 00:11:07.794 "data_offset": 0, 00:11:07.794 "data_size": 65536 00:11:07.794 }, 00:11:07.794 { 00:11:07.794 "name": null, 00:11:07.794 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:07.794 "is_configured": false, 00:11:07.794 "data_offset": 0, 00:11:07.794 "data_size": 65536 00:11:07.794 }, 00:11:07.794 { 00:11:07.794 "name": "BaseBdev3", 00:11:07.794 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:07.794 "is_configured": true, 00:11:07.794 "data_offset": 0, 00:11:07.794 "data_size": 65536 00:11:07.794 }, 00:11:07.794 { 00:11:07.794 "name": "BaseBdev4", 00:11:07.794 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:07.794 "is_configured": true, 00:11:07.794 "data_offset": 0, 00:11:07.794 "data_size": 65536 00:11:07.794 } 00:11:07.794 ] 00:11:07.794 }' 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.794 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.054 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.054 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.054 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.054 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.314 03:19:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.314 [2024-11-21 03:19:55.669879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.314 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.314 "name": "Existed_Raid", 00:11:08.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.314 "strip_size_kb": 64, 00:11:08.314 "state": "configuring", 00:11:08.314 "raid_level": "concat", 00:11:08.314 "superblock": false, 00:11:08.314 "num_base_bdevs": 4, 00:11:08.315 "num_base_bdevs_discovered": 2, 00:11:08.315 "num_base_bdevs_operational": 4, 00:11:08.315 "base_bdevs_list": [ 00:11:08.315 { 00:11:08.315 "name": "BaseBdev1", 00:11:08.315 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:08.315 "is_configured": true, 00:11:08.315 "data_offset": 0, 00:11:08.315 "data_size": 65536 00:11:08.315 }, 00:11:08.315 { 00:11:08.315 "name": null, 00:11:08.315 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:08.315 "is_configured": false, 00:11:08.315 "data_offset": 0, 00:11:08.315 "data_size": 65536 00:11:08.315 }, 00:11:08.315 { 00:11:08.315 "name": null, 00:11:08.315 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:08.315 "is_configured": false, 00:11:08.315 "data_offset": 0, 00:11:08.315 "data_size": 65536 00:11:08.315 }, 00:11:08.315 { 00:11:08.315 "name": "BaseBdev4", 00:11:08.315 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:08.315 "is_configured": true, 00:11:08.315 "data_offset": 0, 00:11:08.315 "data_size": 65536 00:11:08.315 } 00:11:08.315 ] 00:11:08.315 }' 00:11:08.315 03:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.315 03:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 [2024-11-21 03:19:56.194064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.885 "name": "Existed_Raid", 00:11:08.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.885 "strip_size_kb": 64, 00:11:08.885 "state": "configuring", 00:11:08.885 "raid_level": "concat", 00:11:08.885 "superblock": false, 00:11:08.885 "num_base_bdevs": 4, 00:11:08.885 "num_base_bdevs_discovered": 3, 00:11:08.885 "num_base_bdevs_operational": 4, 00:11:08.885 "base_bdevs_list": [ 00:11:08.885 { 00:11:08.885 "name": "BaseBdev1", 00:11:08.885 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:08.885 "is_configured": true, 00:11:08.885 "data_offset": 0, 00:11:08.885 "data_size": 65536 00:11:08.885 }, 00:11:08.885 { 00:11:08.885 "name": null, 00:11:08.885 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:08.885 "is_configured": false, 00:11:08.885 "data_offset": 0, 00:11:08.885 "data_size": 65536 00:11:08.885 }, 00:11:08.885 { 00:11:08.885 "name": "BaseBdev3", 00:11:08.885 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:08.885 "is_configured": true, 00:11:08.885 "data_offset": 0, 00:11:08.885 "data_size": 65536 00:11:08.885 }, 00:11:08.885 { 00:11:08.885 "name": 
"BaseBdev4", 00:11:08.885 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:08.885 "is_configured": true, 00:11:08.885 "data_offset": 0, 00:11:08.885 "data_size": 65536 00:11:08.885 } 00:11:08.885 ] 00:11:08.885 }' 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.885 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.145 [2024-11-21 03:19:56.654182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.145 "name": "Existed_Raid", 00:11:09.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.145 "strip_size_kb": 64, 00:11:09.145 "state": "configuring", 00:11:09.145 "raid_level": "concat", 00:11:09.145 "superblock": false, 00:11:09.145 "num_base_bdevs": 4, 00:11:09.145 "num_base_bdevs_discovered": 2, 00:11:09.145 "num_base_bdevs_operational": 4, 00:11:09.145 "base_bdevs_list": [ 00:11:09.145 { 00:11:09.145 "name": null, 00:11:09.145 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 
00:11:09.145 "is_configured": false, 00:11:09.145 "data_offset": 0, 00:11:09.145 "data_size": 65536 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": null, 00:11:09.145 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:09.145 "is_configured": false, 00:11:09.145 "data_offset": 0, 00:11:09.145 "data_size": 65536 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev3", 00:11:09.145 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 0, 00:11:09.145 "data_size": 65536 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev4", 00:11:09.145 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 0, 00:11:09.145 "data_size": 65536 00:11:09.145 } 00:11:09.145 ] 00:11:09.145 }' 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.145 03:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.714 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.715 03:19:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.715 [2024-11-21 03:19:57.192948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.715 "name": "Existed_Raid", 00:11:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.715 "strip_size_kb": 64, 00:11:09.715 "state": "configuring", 00:11:09.715 "raid_level": "concat", 00:11:09.715 "superblock": false, 00:11:09.715 "num_base_bdevs": 4, 00:11:09.715 "num_base_bdevs_discovered": 3, 00:11:09.715 "num_base_bdevs_operational": 4, 00:11:09.715 "base_bdevs_list": [ 00:11:09.715 { 00:11:09.715 "name": null, 00:11:09.715 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:09.715 "is_configured": false, 00:11:09.715 "data_offset": 0, 00:11:09.715 "data_size": 65536 00:11:09.715 }, 00:11:09.715 { 00:11:09.715 "name": "BaseBdev2", 00:11:09.715 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:09.715 "is_configured": true, 00:11:09.715 "data_offset": 0, 00:11:09.715 "data_size": 65536 00:11:09.715 }, 00:11:09.715 { 00:11:09.715 "name": "BaseBdev3", 00:11:09.715 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:09.715 "is_configured": true, 00:11:09.715 "data_offset": 0, 00:11:09.715 "data_size": 65536 00:11:09.715 }, 00:11:09.715 { 00:11:09.715 "name": "BaseBdev4", 00:11:09.715 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:09.715 "is_configured": true, 00:11:09.715 "data_offset": 0, 00:11:09.715 "data_size": 65536 00:11:09.715 } 00:11:09.715 ] 00:11:09.715 }' 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.715 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.284 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.284 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.284 03:19:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1095d38c-14a9-4baa-aa4d-43b9b95b2e95 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 [2024-11-21 03:19:57.704216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:10.285 [2024-11-21 03:19:57.704265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.285 [2024-11-21 03:19:57.704275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:10.285 [2024-11-21 03:19:57.704515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:10.285 [2024-11-21 03:19:57.704626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.285 [2024-11-21 03:19:57.704636] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:10.285 [2024-11-21 03:19:57.704823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.285 NewBaseBdev 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 [ 00:11:10.285 { 00:11:10.285 "name": "NewBaseBdev", 00:11:10.285 "aliases": [ 00:11:10.285 "1095d38c-14a9-4baa-aa4d-43b9b95b2e95" 00:11:10.285 ], 
00:11:10.285 "product_name": "Malloc disk", 00:11:10.285 "block_size": 512, 00:11:10.285 "num_blocks": 65536, 00:11:10.285 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:10.285 "assigned_rate_limits": { 00:11:10.285 "rw_ios_per_sec": 0, 00:11:10.285 "rw_mbytes_per_sec": 0, 00:11:10.285 "r_mbytes_per_sec": 0, 00:11:10.285 "w_mbytes_per_sec": 0 00:11:10.285 }, 00:11:10.285 "claimed": true, 00:11:10.285 "claim_type": "exclusive_write", 00:11:10.285 "zoned": false, 00:11:10.285 "supported_io_types": { 00:11:10.285 "read": true, 00:11:10.285 "write": true, 00:11:10.285 "unmap": true, 00:11:10.285 "flush": true, 00:11:10.285 "reset": true, 00:11:10.285 "nvme_admin": false, 00:11:10.285 "nvme_io": false, 00:11:10.285 "nvme_io_md": false, 00:11:10.285 "write_zeroes": true, 00:11:10.285 "zcopy": true, 00:11:10.285 "get_zone_info": false, 00:11:10.285 "zone_management": false, 00:11:10.285 "zone_append": false, 00:11:10.285 "compare": false, 00:11:10.285 "compare_and_write": false, 00:11:10.285 "abort": true, 00:11:10.285 "seek_hole": false, 00:11:10.285 "seek_data": false, 00:11:10.285 "copy": true, 00:11:10.285 "nvme_iov_md": false 00:11:10.285 }, 00:11:10.285 "memory_domains": [ 00:11:10.285 { 00:11:10.285 "dma_device_id": "system", 00:11:10.285 "dma_device_type": 1 00:11:10.285 }, 00:11:10.285 { 00:11:10.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.285 "dma_device_type": 2 00:11:10.285 } 00:11:10.285 ], 00:11:10.285 "driver_specific": {} 00:11:10.285 } 00:11:10.285 ] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.285 03:19:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.285 "name": "Existed_Raid", 00:11:10.285 "uuid": "1abe495f-23d3-4dff-9392-d175582dc0f0", 00:11:10.285 "strip_size_kb": 64, 00:11:10.285 "state": "online", 00:11:10.285 "raid_level": "concat", 00:11:10.285 "superblock": false, 00:11:10.285 "num_base_bdevs": 4, 00:11:10.285 "num_base_bdevs_discovered": 4, 00:11:10.285 "num_base_bdevs_operational": 4, 00:11:10.285 "base_bdevs_list": [ 00:11:10.285 { 00:11:10.285 "name": "NewBaseBdev", 00:11:10.285 
"uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:10.285 "is_configured": true, 00:11:10.285 "data_offset": 0, 00:11:10.285 "data_size": 65536 00:11:10.285 }, 00:11:10.285 { 00:11:10.285 "name": "BaseBdev2", 00:11:10.285 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:10.285 "is_configured": true, 00:11:10.285 "data_offset": 0, 00:11:10.285 "data_size": 65536 00:11:10.285 }, 00:11:10.285 { 00:11:10.285 "name": "BaseBdev3", 00:11:10.285 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:10.285 "is_configured": true, 00:11:10.285 "data_offset": 0, 00:11:10.285 "data_size": 65536 00:11:10.285 }, 00:11:10.285 { 00:11:10.285 "name": "BaseBdev4", 00:11:10.285 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:10.285 "is_configured": true, 00:11:10.285 "data_offset": 0, 00:11:10.285 "data_size": 65536 00:11:10.285 } 00:11:10.285 ] 00:11:10.285 }' 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.285 03:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.854 03:19:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.854 [2024-11-21 03:19:58.148775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.854 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.854 "name": "Existed_Raid", 00:11:10.854 "aliases": [ 00:11:10.854 "1abe495f-23d3-4dff-9392-d175582dc0f0" 00:11:10.854 ], 00:11:10.854 "product_name": "Raid Volume", 00:11:10.854 "block_size": 512, 00:11:10.854 "num_blocks": 262144, 00:11:10.854 "uuid": "1abe495f-23d3-4dff-9392-d175582dc0f0", 00:11:10.854 "assigned_rate_limits": { 00:11:10.854 "rw_ios_per_sec": 0, 00:11:10.854 "rw_mbytes_per_sec": 0, 00:11:10.854 "r_mbytes_per_sec": 0, 00:11:10.854 "w_mbytes_per_sec": 0 00:11:10.854 }, 00:11:10.854 "claimed": false, 00:11:10.854 "zoned": false, 00:11:10.854 "supported_io_types": { 00:11:10.854 "read": true, 00:11:10.854 "write": true, 00:11:10.854 "unmap": true, 00:11:10.854 "flush": true, 00:11:10.854 "reset": true, 00:11:10.854 "nvme_admin": false, 00:11:10.854 "nvme_io": false, 00:11:10.854 "nvme_io_md": false, 00:11:10.854 "write_zeroes": true, 00:11:10.854 "zcopy": false, 00:11:10.854 "get_zone_info": false, 00:11:10.854 "zone_management": false, 00:11:10.854 "zone_append": false, 00:11:10.854 "compare": false, 00:11:10.854 "compare_and_write": false, 00:11:10.854 "abort": false, 00:11:10.854 "seek_hole": false, 00:11:10.854 "seek_data": false, 00:11:10.854 "copy": false, 00:11:10.854 "nvme_iov_md": false 00:11:10.854 }, 00:11:10.854 "memory_domains": [ 00:11:10.854 { 00:11:10.854 "dma_device_id": "system", 00:11:10.854 "dma_device_type": 1 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.854 
"dma_device_type": 2 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "system", 00:11:10.854 "dma_device_type": 1 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.854 "dma_device_type": 2 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "system", 00:11:10.854 "dma_device_type": 1 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.854 "dma_device_type": 2 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "system", 00:11:10.854 "dma_device_type": 1 00:11:10.854 }, 00:11:10.854 { 00:11:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.854 "dma_device_type": 2 00:11:10.854 } 00:11:10.854 ], 00:11:10.854 "driver_specific": { 00:11:10.854 "raid": { 00:11:10.854 "uuid": "1abe495f-23d3-4dff-9392-d175582dc0f0", 00:11:10.854 "strip_size_kb": 64, 00:11:10.854 "state": "online", 00:11:10.854 "raid_level": "concat", 00:11:10.854 "superblock": false, 00:11:10.854 "num_base_bdevs": 4, 00:11:10.854 "num_base_bdevs_discovered": 4, 00:11:10.855 "num_base_bdevs_operational": 4, 00:11:10.855 "base_bdevs_list": [ 00:11:10.855 { 00:11:10.855 "name": "NewBaseBdev", 00:11:10.855 "uuid": "1095d38c-14a9-4baa-aa4d-43b9b95b2e95", 00:11:10.855 "is_configured": true, 00:11:10.855 "data_offset": 0, 00:11:10.855 "data_size": 65536 00:11:10.855 }, 00:11:10.855 { 00:11:10.855 "name": "BaseBdev2", 00:11:10.855 "uuid": "51c2b1db-4c7e-41d0-9587-d074de8d6dcd", 00:11:10.855 "is_configured": true, 00:11:10.855 "data_offset": 0, 00:11:10.855 "data_size": 65536 00:11:10.855 }, 00:11:10.855 { 00:11:10.855 "name": "BaseBdev3", 00:11:10.855 "uuid": "350fa3cc-17c1-41d4-9fa9-d5329f8dd187", 00:11:10.855 "is_configured": true, 00:11:10.855 "data_offset": 0, 00:11:10.855 "data_size": 65536 00:11:10.855 }, 00:11:10.855 { 00:11:10.855 "name": "BaseBdev4", 00:11:10.855 "uuid": "69231f12-a72b-4eee-947a-d0a3bf24dbd8", 00:11:10.855 "is_configured": true, 00:11:10.855 "data_offset": 0, 
00:11:10.855 "data_size": 65536 00:11:10.855 } 00:11:10.855 ] 00:11:10.855 } 00:11:10.855 } 00:11:10.855 }' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:10.855 BaseBdev2 00:11:10.855 BaseBdev3 00:11:10.855 BaseBdev4' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.855 03:19:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.855 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.118 
03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.118 [2024-11-21 03:19:58.500546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.118 [2024-11-21 03:19:58.500648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.118 [2024-11-21 03:19:58.500746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.118 [2024-11-21 03:19:58.500817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.118 [2024-11-21 03:19:58.500834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84212 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84212 ']' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84212 00:11:11.118 03:19:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84212 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84212' 00:11:11.118 killing process with pid 84212 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 84212 00:11:11.118 [2024-11-21 03:19:58.552441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.118 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 84212 00:11:11.118 [2024-11-21 03:19:58.594479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:11.378 00:11:11.378 real 0m9.894s 00:11:11.378 user 0m16.845s 00:11:11.378 sys 0m2.191s 00:11:11.378 ************************************ 00:11:11.378 END TEST raid_state_function_test 00:11:11.378 ************************************ 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.378 03:19:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:11.378 03:19:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.378 03:19:58 bdev_raid -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:11:11.378 03:19:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.378 ************************************ 00:11:11.378 START TEST raid_state_function_test_sb 00:11:11.378 ************************************ 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:11.378 Process raid pid: 84867 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84867 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84867' 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 
84867 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84867 ']' 00:11:11.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.378 03:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 [2024-11-21 03:19:58.982115] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:11.638 [2024-11-21 03:19:58.982345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.638 [2024-11-21 03:19:59.120068] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:11.638 [2024-11-21 03:19:59.158805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.638 [2024-11-21 03:19:59.188961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.896 [2024-11-21 03:19:59.231997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.896 [2024-11-21 03:19:59.232137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.464 [2024-11-21 03:19:59.839165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.464 [2024-11-21 03:19:59.839315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.464 [2024-11-21 03:19:59.839360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.464 [2024-11-21 03:19:59.839372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.464 [2024-11-21 03:19:59.839383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.464 [2024-11-21 03:19:59.839390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.464 [2024-11-21 03:19:59.839398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:12.464 [2024-11-21 03:19:59.839406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.464 "name": "Existed_Raid", 00:11:12.464 "uuid": "e5e81657-1469-443d-9f5b-7583b0229ead", 00:11:12.464 "strip_size_kb": 64, 00:11:12.464 "state": "configuring", 00:11:12.464 "raid_level": "concat", 00:11:12.464 "superblock": true, 00:11:12.464 "num_base_bdevs": 4, 00:11:12.464 "num_base_bdevs_discovered": 0, 00:11:12.464 "num_base_bdevs_operational": 4, 00:11:12.464 "base_bdevs_list": [ 00:11:12.464 { 00:11:12.464 "name": "BaseBdev1", 00:11:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.464 "is_configured": false, 00:11:12.464 "data_offset": 0, 00:11:12.464 "data_size": 0 00:11:12.464 }, 00:11:12.464 { 00:11:12.464 "name": "BaseBdev2", 00:11:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.464 "is_configured": false, 00:11:12.464 "data_offset": 0, 00:11:12.464 "data_size": 0 00:11:12.464 }, 00:11:12.464 { 00:11:12.464 "name": "BaseBdev3", 00:11:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.464 "is_configured": false, 00:11:12.464 "data_offset": 0, 00:11:12.464 "data_size": 0 00:11:12.464 }, 00:11:12.464 { 00:11:12.464 "name": "BaseBdev4", 00:11:12.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.464 "is_configured": false, 00:11:12.464 "data_offset": 0, 00:11:12.464 "data_size": 0 00:11:12.464 } 00:11:12.464 ] 00:11:12.464 }' 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.464 03:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.032 [2024-11-21 03:20:00.307170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.032 [2024-11-21 03:20:00.307312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.032 [2024-11-21 03:20:00.315199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.032 [2024-11-21 03:20:00.315301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.032 [2024-11-21 03:20:00.315330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.032 [2024-11-21 03:20:00.315353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.032 [2024-11-21 03:20:00.315373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.032 [2024-11-21 03:20:00.315394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.032 [2024-11-21 03:20:00.315414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.032 [2024-11-21 03:20:00.315435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.032 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.033 [2024-11-21 03:20:00.332194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.033 BaseBdev1 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.033 [ 00:11:13.033 { 00:11:13.033 "name": "BaseBdev1", 00:11:13.033 "aliases": [ 00:11:13.033 "b94189b1-3268-4f3d-8e24-1dca66354905" 00:11:13.033 ], 00:11:13.033 "product_name": "Malloc disk", 00:11:13.033 "block_size": 512, 00:11:13.033 "num_blocks": 65536, 00:11:13.033 "uuid": "b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:13.033 "assigned_rate_limits": { 00:11:13.033 "rw_ios_per_sec": 0, 00:11:13.033 "rw_mbytes_per_sec": 0, 00:11:13.033 "r_mbytes_per_sec": 0, 00:11:13.033 "w_mbytes_per_sec": 0 00:11:13.033 }, 00:11:13.033 "claimed": true, 00:11:13.033 "claim_type": "exclusive_write", 00:11:13.033 "zoned": false, 00:11:13.033 "supported_io_types": { 00:11:13.033 "read": true, 00:11:13.033 "write": true, 00:11:13.033 "unmap": true, 00:11:13.033 "flush": true, 00:11:13.033 "reset": true, 00:11:13.033 "nvme_admin": false, 00:11:13.033 "nvme_io": false, 00:11:13.033 "nvme_io_md": false, 00:11:13.033 "write_zeroes": true, 00:11:13.033 "zcopy": true, 00:11:13.033 "get_zone_info": false, 00:11:13.033 "zone_management": false, 00:11:13.033 "zone_append": false, 00:11:13.033 "compare": false, 00:11:13.033 "compare_and_write": false, 00:11:13.033 "abort": true, 00:11:13.033 "seek_hole": false, 00:11:13.033 "seek_data": false, 00:11:13.033 "copy": true, 00:11:13.033 "nvme_iov_md": false 00:11:13.033 }, 00:11:13.033 "memory_domains": [ 00:11:13.033 { 00:11:13.033 "dma_device_id": "system", 00:11:13.033 "dma_device_type": 1 00:11:13.033 }, 00:11:13.033 { 00:11:13.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.033 "dma_device_type": 2 00:11:13.033 } 00:11:13.033 ], 00:11:13.033 "driver_specific": {} 00:11:13.033 } 00:11:13.033 ] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.033 "name": "Existed_Raid", 00:11:13.033 "uuid": "fdb96aff-cc3c-45d6-b5d3-74ef60496341", 
00:11:13.033 "strip_size_kb": 64, 00:11:13.033 "state": "configuring", 00:11:13.033 "raid_level": "concat", 00:11:13.033 "superblock": true, 00:11:13.033 "num_base_bdevs": 4, 00:11:13.033 "num_base_bdevs_discovered": 1, 00:11:13.033 "num_base_bdevs_operational": 4, 00:11:13.033 "base_bdevs_list": [ 00:11:13.033 { 00:11:13.033 "name": "BaseBdev1", 00:11:13.033 "uuid": "b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:13.033 "is_configured": true, 00:11:13.033 "data_offset": 2048, 00:11:13.033 "data_size": 63488 00:11:13.033 }, 00:11:13.033 { 00:11:13.033 "name": "BaseBdev2", 00:11:13.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.033 "is_configured": false, 00:11:13.033 "data_offset": 0, 00:11:13.033 "data_size": 0 00:11:13.033 }, 00:11:13.033 { 00:11:13.033 "name": "BaseBdev3", 00:11:13.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.033 "is_configured": false, 00:11:13.033 "data_offset": 0, 00:11:13.033 "data_size": 0 00:11:13.033 }, 00:11:13.033 { 00:11:13.033 "name": "BaseBdev4", 00:11:13.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.033 "is_configured": false, 00:11:13.033 "data_offset": 0, 00:11:13.033 "data_size": 0 00:11:13.033 } 00:11:13.033 ] 00:11:13.033 }' 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.033 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.293 [2024-11-21 03:20:00.816403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.293 [2024-11-21 03:20:00.816493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.293 [2024-11-21 03:20:00.824445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.293 [2024-11-21 03:20:00.826369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.293 [2024-11-21 03:20:00.826468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.293 [2024-11-21 03:20:00.826482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.293 [2024-11-21 03:20:00.826490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.293 [2024-11-21 03:20:00.826497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.293 [2024-11-21 03:20:00.826505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.293 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.597 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.597 "name": "Existed_Raid", 00:11:13.597 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 00:11:13.597 "strip_size_kb": 64, 00:11:13.597 "state": "configuring", 00:11:13.597 "raid_level": "concat", 00:11:13.597 "superblock": true, 00:11:13.597 
"num_base_bdevs": 4, 00:11:13.597 "num_base_bdevs_discovered": 1, 00:11:13.597 "num_base_bdevs_operational": 4, 00:11:13.597 "base_bdevs_list": [ 00:11:13.597 { 00:11:13.597 "name": "BaseBdev1", 00:11:13.597 "uuid": "b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:13.597 "is_configured": true, 00:11:13.597 "data_offset": 2048, 00:11:13.597 "data_size": 63488 00:11:13.597 }, 00:11:13.597 { 00:11:13.597 "name": "BaseBdev2", 00:11:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.597 "is_configured": false, 00:11:13.597 "data_offset": 0, 00:11:13.597 "data_size": 0 00:11:13.597 }, 00:11:13.597 { 00:11:13.597 "name": "BaseBdev3", 00:11:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.597 "is_configured": false, 00:11:13.597 "data_offset": 0, 00:11:13.597 "data_size": 0 00:11:13.597 }, 00:11:13.597 { 00:11:13.597 "name": "BaseBdev4", 00:11:13.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.598 "is_configured": false, 00:11:13.598 "data_offset": 0, 00:11:13.598 "data_size": 0 00:11:13.598 } 00:11:13.598 ] 00:11:13.598 }' 00:11:13.598 03:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.598 03:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 [2024-11-21 03:20:01.291713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.858 BaseBdev2 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 [ 00:11:13.858 { 00:11:13.858 "name": "BaseBdev2", 00:11:13.858 "aliases": [ 00:11:13.858 "bff4eb3b-d13a-4f96-954a-b9d4ec45262f" 00:11:13.858 ], 00:11:13.858 "product_name": "Malloc disk", 00:11:13.858 "block_size": 512, 00:11:13.858 "num_blocks": 65536, 00:11:13.858 "uuid": "bff4eb3b-d13a-4f96-954a-b9d4ec45262f", 00:11:13.858 "assigned_rate_limits": { 00:11:13.858 "rw_ios_per_sec": 0, 00:11:13.858 "rw_mbytes_per_sec": 0, 00:11:13.858 "r_mbytes_per_sec": 0, 00:11:13.858 "w_mbytes_per_sec": 0 00:11:13.858 }, 00:11:13.858 "claimed": true, 00:11:13.858 "claim_type": 
"exclusive_write", 00:11:13.858 "zoned": false, 00:11:13.858 "supported_io_types": { 00:11:13.858 "read": true, 00:11:13.858 "write": true, 00:11:13.858 "unmap": true, 00:11:13.858 "flush": true, 00:11:13.858 "reset": true, 00:11:13.858 "nvme_admin": false, 00:11:13.858 "nvme_io": false, 00:11:13.858 "nvme_io_md": false, 00:11:13.858 "write_zeroes": true, 00:11:13.858 "zcopy": true, 00:11:13.858 "get_zone_info": false, 00:11:13.858 "zone_management": false, 00:11:13.858 "zone_append": false, 00:11:13.858 "compare": false, 00:11:13.858 "compare_and_write": false, 00:11:13.858 "abort": true, 00:11:13.858 "seek_hole": false, 00:11:13.858 "seek_data": false, 00:11:13.858 "copy": true, 00:11:13.858 "nvme_iov_md": false 00:11:13.858 }, 00:11:13.858 "memory_domains": [ 00:11:13.858 { 00:11:13.858 "dma_device_id": "system", 00:11:13.858 "dma_device_type": 1 00:11:13.858 }, 00:11:13.858 { 00:11:13.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.858 "dma_device_type": 2 00:11:13.858 } 00:11:13.858 ], 00:11:13.858 "driver_specific": {} 00:11:13.858 } 00:11:13.858 ] 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.858 "name": "Existed_Raid", 00:11:13.858 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 00:11:13.858 "strip_size_kb": 64, 00:11:13.858 "state": "configuring", 00:11:13.858 "raid_level": "concat", 00:11:13.858 "superblock": true, 00:11:13.858 "num_base_bdevs": 4, 00:11:13.858 "num_base_bdevs_discovered": 2, 00:11:13.858 "num_base_bdevs_operational": 4, 00:11:13.858 "base_bdevs_list": [ 00:11:13.858 { 00:11:13.858 "name": "BaseBdev1", 00:11:13.858 "uuid": "b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:13.858 "is_configured": true, 00:11:13.858 "data_offset": 2048, 00:11:13.858 
"data_size": 63488 00:11:13.858 }, 00:11:13.858 { 00:11:13.858 "name": "BaseBdev2", 00:11:13.858 "uuid": "bff4eb3b-d13a-4f96-954a-b9d4ec45262f", 00:11:13.858 "is_configured": true, 00:11:13.858 "data_offset": 2048, 00:11:13.858 "data_size": 63488 00:11:13.858 }, 00:11:13.858 { 00:11:13.858 "name": "BaseBdev3", 00:11:13.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.858 "is_configured": false, 00:11:13.858 "data_offset": 0, 00:11:13.858 "data_size": 0 00:11:13.858 }, 00:11:13.858 { 00:11:13.858 "name": "BaseBdev4", 00:11:13.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.858 "is_configured": false, 00:11:13.858 "data_offset": 0, 00:11:13.858 "data_size": 0 00:11:13.858 } 00:11:13.858 ] 00:11:13.858 }' 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.858 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.427 [2024-11-21 03:20:01.822092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.427 BaseBdev3 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.427 [ 00:11:14.427 { 00:11:14.427 "name": "BaseBdev3", 00:11:14.427 "aliases": [ 00:11:14.427 "0c136130-af49-42a8-be0d-34914e94f350" 00:11:14.427 ], 00:11:14.427 "product_name": "Malloc disk", 00:11:14.427 "block_size": 512, 00:11:14.427 "num_blocks": 65536, 00:11:14.427 "uuid": "0c136130-af49-42a8-be0d-34914e94f350", 00:11:14.427 "assigned_rate_limits": { 00:11:14.427 "rw_ios_per_sec": 0, 00:11:14.427 "rw_mbytes_per_sec": 0, 00:11:14.427 "r_mbytes_per_sec": 0, 00:11:14.427 "w_mbytes_per_sec": 0 00:11:14.427 }, 00:11:14.427 "claimed": true, 00:11:14.427 "claim_type": "exclusive_write", 00:11:14.427 "zoned": false, 00:11:14.427 "supported_io_types": { 00:11:14.427 "read": true, 00:11:14.427 "write": true, 00:11:14.427 "unmap": true, 00:11:14.427 "flush": true, 00:11:14.427 "reset": true, 00:11:14.427 "nvme_admin": false, 00:11:14.427 "nvme_io": false, 00:11:14.427 "nvme_io_md": false, 
00:11:14.427 "write_zeroes": true, 00:11:14.427 "zcopy": true, 00:11:14.427 "get_zone_info": false, 00:11:14.427 "zone_management": false, 00:11:14.427 "zone_append": false, 00:11:14.427 "compare": false, 00:11:14.427 "compare_and_write": false, 00:11:14.427 "abort": true, 00:11:14.427 "seek_hole": false, 00:11:14.427 "seek_data": false, 00:11:14.427 "copy": true, 00:11:14.427 "nvme_iov_md": false 00:11:14.427 }, 00:11:14.427 "memory_domains": [ 00:11:14.427 { 00:11:14.427 "dma_device_id": "system", 00:11:14.427 "dma_device_type": 1 00:11:14.427 }, 00:11:14.427 { 00:11:14.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.427 "dma_device_type": 2 00:11:14.427 } 00:11:14.427 ], 00:11:14.427 "driver_specific": {} 00:11:14.427 } 00:11:14.427 ] 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.427 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.428 "name": "Existed_Raid", 00:11:14.428 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 00:11:14.428 "strip_size_kb": 64, 00:11:14.428 "state": "configuring", 00:11:14.428 "raid_level": "concat", 00:11:14.428 "superblock": true, 00:11:14.428 "num_base_bdevs": 4, 00:11:14.428 "num_base_bdevs_discovered": 3, 00:11:14.428 "num_base_bdevs_operational": 4, 00:11:14.428 "base_bdevs_list": [ 00:11:14.428 { 00:11:14.428 "name": "BaseBdev1", 00:11:14.428 "uuid": "b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:14.428 "is_configured": true, 00:11:14.428 "data_offset": 2048, 00:11:14.428 "data_size": 63488 00:11:14.428 }, 00:11:14.428 { 00:11:14.428 "name": "BaseBdev2", 00:11:14.428 "uuid": "bff4eb3b-d13a-4f96-954a-b9d4ec45262f", 00:11:14.428 "is_configured": true, 00:11:14.428 "data_offset": 2048, 00:11:14.428 "data_size": 63488 00:11:14.428 }, 00:11:14.428 { 00:11:14.428 "name": "BaseBdev3", 00:11:14.428 "uuid": 
"0c136130-af49-42a8-be0d-34914e94f350", 00:11:14.428 "is_configured": true, 00:11:14.428 "data_offset": 2048, 00:11:14.428 "data_size": 63488 00:11:14.428 }, 00:11:14.428 { 00:11:14.428 "name": "BaseBdev4", 00:11:14.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.428 "is_configured": false, 00:11:14.428 "data_offset": 0, 00:11:14.428 "data_size": 0 00:11:14.428 } 00:11:14.428 ] 00:11:14.428 }' 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.428 03:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.998 [2024-11-21 03:20:02.341446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.998 [2024-11-21 03:20:02.341662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:14.998 [2024-11-21 03:20:02.341689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.998 BaseBdev4 00:11:14.998 [2024-11-21 03:20:02.341961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:14.998 [2024-11-21 03:20:02.342112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:14.998 [2024-11-21 03:20:02.342123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:14.998 [2024-11-21 03:20:02.342253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.998 [ 00:11:14.998 { 00:11:14.998 "name": "BaseBdev4", 00:11:14.998 "aliases": [ 00:11:14.998 "7d692c64-e509-495d-a1c0-17acba8e9568" 00:11:14.998 ], 00:11:14.998 "product_name": "Malloc disk", 00:11:14.998 "block_size": 512, 00:11:14.998 "num_blocks": 65536, 00:11:14.998 "uuid": "7d692c64-e509-495d-a1c0-17acba8e9568", 00:11:14.998 "assigned_rate_limits": { 00:11:14.998 "rw_ios_per_sec": 0, 00:11:14.998 "rw_mbytes_per_sec": 0, 00:11:14.998 "r_mbytes_per_sec": 0, 
00:11:14.998 "w_mbytes_per_sec": 0 00:11:14.998 }, 00:11:14.998 "claimed": true, 00:11:14.998 "claim_type": "exclusive_write", 00:11:14.998 "zoned": false, 00:11:14.998 "supported_io_types": { 00:11:14.998 "read": true, 00:11:14.998 "write": true, 00:11:14.998 "unmap": true, 00:11:14.998 "flush": true, 00:11:14.998 "reset": true, 00:11:14.998 "nvme_admin": false, 00:11:14.998 "nvme_io": false, 00:11:14.998 "nvme_io_md": false, 00:11:14.998 "write_zeroes": true, 00:11:14.998 "zcopy": true, 00:11:14.998 "get_zone_info": false, 00:11:14.998 "zone_management": false, 00:11:14.998 "zone_append": false, 00:11:14.998 "compare": false, 00:11:14.998 "compare_and_write": false, 00:11:14.998 "abort": true, 00:11:14.998 "seek_hole": false, 00:11:14.998 "seek_data": false, 00:11:14.998 "copy": true, 00:11:14.998 "nvme_iov_md": false 00:11:14.998 }, 00:11:14.998 "memory_domains": [ 00:11:14.998 { 00:11:14.998 "dma_device_id": "system", 00:11:14.998 "dma_device_type": 1 00:11:14.998 }, 00:11:14.998 { 00:11:14.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.998 "dma_device_type": 2 00:11:14.998 } 00:11:14.998 ], 00:11:14.998 "driver_specific": {} 00:11:14.998 } 00:11:14.998 ] 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.998 "name": "Existed_Raid", 00:11:14.998 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 00:11:14.998 "strip_size_kb": 64, 00:11:14.998 "state": "online", 00:11:14.998 "raid_level": "concat", 00:11:14.998 "superblock": true, 00:11:14.998 "num_base_bdevs": 4, 00:11:14.998 "num_base_bdevs_discovered": 4, 00:11:14.998 "num_base_bdevs_operational": 4, 00:11:14.998 "base_bdevs_list": [ 00:11:14.998 { 00:11:14.998 "name": "BaseBdev1", 00:11:14.998 "uuid": 
"b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:14.998 "is_configured": true, 00:11:14.998 "data_offset": 2048, 00:11:14.998 "data_size": 63488 00:11:14.998 }, 00:11:14.998 { 00:11:14.998 "name": "BaseBdev2", 00:11:14.998 "uuid": "bff4eb3b-d13a-4f96-954a-b9d4ec45262f", 00:11:14.998 "is_configured": true, 00:11:14.998 "data_offset": 2048, 00:11:14.998 "data_size": 63488 00:11:14.998 }, 00:11:14.998 { 00:11:14.998 "name": "BaseBdev3", 00:11:14.998 "uuid": "0c136130-af49-42a8-be0d-34914e94f350", 00:11:14.998 "is_configured": true, 00:11:14.998 "data_offset": 2048, 00:11:14.998 "data_size": 63488 00:11:14.998 }, 00:11:14.998 { 00:11:14.998 "name": "BaseBdev4", 00:11:14.998 "uuid": "7d692c64-e509-495d-a1c0-17acba8e9568", 00:11:14.998 "is_configured": true, 00:11:14.998 "data_offset": 2048, 00:11:14.998 "data_size": 63488 00:11:14.998 } 00:11:14.998 ] 00:11:14.998 }' 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.998 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.568 [2024-11-21 03:20:02.873998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.568 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.568 "name": "Existed_Raid", 00:11:15.568 "aliases": [ 00:11:15.568 "abfcecc0-7e51-4385-843f-9124ff5c3f5f" 00:11:15.568 ], 00:11:15.568 "product_name": "Raid Volume", 00:11:15.568 "block_size": 512, 00:11:15.568 "num_blocks": 253952, 00:11:15.568 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 00:11:15.568 "assigned_rate_limits": { 00:11:15.568 "rw_ios_per_sec": 0, 00:11:15.568 "rw_mbytes_per_sec": 0, 00:11:15.568 "r_mbytes_per_sec": 0, 00:11:15.568 "w_mbytes_per_sec": 0 00:11:15.568 }, 00:11:15.568 "claimed": false, 00:11:15.568 "zoned": false, 00:11:15.568 "supported_io_types": { 00:11:15.568 "read": true, 00:11:15.568 "write": true, 00:11:15.568 "unmap": true, 00:11:15.568 "flush": true, 00:11:15.568 "reset": true, 00:11:15.568 "nvme_admin": false, 00:11:15.568 "nvme_io": false, 00:11:15.568 "nvme_io_md": false, 00:11:15.568 "write_zeroes": true, 00:11:15.568 "zcopy": false, 00:11:15.568 "get_zone_info": false, 00:11:15.568 "zone_management": false, 00:11:15.568 "zone_append": false, 00:11:15.568 "compare": false, 00:11:15.568 "compare_and_write": false, 00:11:15.568 "abort": false, 00:11:15.568 "seek_hole": false, 00:11:15.568 "seek_data": false, 00:11:15.568 "copy": false, 00:11:15.568 "nvme_iov_md": false 00:11:15.568 }, 00:11:15.568 "memory_domains": [ 00:11:15.568 { 00:11:15.568 "dma_device_id": "system", 00:11:15.568 "dma_device_type": 1 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.568 "dma_device_type": 2 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "dma_device_id": "system", 00:11:15.568 "dma_device_type": 1 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.568 "dma_device_type": 2 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "dma_device_id": "system", 00:11:15.568 "dma_device_type": 1 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.568 "dma_device_type": 2 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "dma_device_id": "system", 00:11:15.568 "dma_device_type": 1 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.568 "dma_device_type": 2 00:11:15.568 } 00:11:15.568 ], 00:11:15.568 "driver_specific": { 00:11:15.568 "raid": { 00:11:15.568 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 00:11:15.568 "strip_size_kb": 64, 00:11:15.568 "state": "online", 00:11:15.568 "raid_level": "concat", 00:11:15.568 "superblock": true, 00:11:15.568 "num_base_bdevs": 4, 00:11:15.568 "num_base_bdevs_discovered": 4, 00:11:15.568 "num_base_bdevs_operational": 4, 00:11:15.568 "base_bdevs_list": [ 00:11:15.568 { 00:11:15.568 "name": "BaseBdev1", 00:11:15.568 "uuid": "b94189b1-3268-4f3d-8e24-1dca66354905", 00:11:15.568 "is_configured": true, 00:11:15.568 "data_offset": 2048, 00:11:15.568 "data_size": 63488 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "name": "BaseBdev2", 00:11:15.568 "uuid": "bff4eb3b-d13a-4f96-954a-b9d4ec45262f", 00:11:15.568 "is_configured": true, 00:11:15.568 "data_offset": 2048, 00:11:15.568 "data_size": 63488 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "name": "BaseBdev3", 00:11:15.568 "uuid": "0c136130-af49-42a8-be0d-34914e94f350", 00:11:15.568 "is_configured": true, 00:11:15.568 "data_offset": 2048, 00:11:15.568 "data_size": 63488 00:11:15.568 }, 00:11:15.568 { 00:11:15.568 "name": "BaseBdev4", 00:11:15.568 "uuid": "7d692c64-e509-495d-a1c0-17acba8e9568", 
00:11:15.568 "is_configured": true, 00:11:15.568 "data_offset": 2048, 00:11:15.569 "data_size": 63488 00:11:15.569 } 00:11:15.569 ] 00:11:15.569 } 00:11:15.569 } 00:11:15.569 }' 00:11:15.569 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.569 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.569 BaseBdev2 00:11:15.569 BaseBdev3 00:11:15.569 BaseBdev4' 00:11:15.569 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.569 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.569 03:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.569 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
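The trace above compares each base bdev against the raid bdev by serializing the same four metadata fields with an identical jq filter; null (absent) fields join as empty strings, which is why the expected value is `'512 '` with trailing blanks. A minimal sketch of that filter outside the test harness, using a hypothetical one-entry sample of `bdev_get_bdevs` output trimmed to the compared fields:

```shell
# Hypothetical sample mirroring bdev_get_bdevs output; only the fields
# the test compares are present (md_size/md_interleave/dif_type absent).
printf '[{"name":"BaseBdev1","block_size":512}]' |
  # Same filter as bdev_raid.sh@189/192: absent fields become empty
  # strings under join, so every bdev must serialize identically.
  jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
```

The comparison then reduces to a plain string equality test (`[[ 512 == \5\1\2\ \ \ ]]` in the escaped form xtrace prints), so any mismatch in block size or DIF layout between base bdevs fails fast.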
00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.829 [2024-11-21 03:20:03.189842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.829 [2024-11-21 03:20:03.189958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.829 [2024-11-21 03:20:03.190056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- 
# expected_state=offline 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.829 "name": "Existed_Raid", 00:11:15.829 "uuid": "abfcecc0-7e51-4385-843f-9124ff5c3f5f", 
00:11:15.829 "strip_size_kb": 64, 00:11:15.829 "state": "offline", 00:11:15.829 "raid_level": "concat", 00:11:15.829 "superblock": true, 00:11:15.829 "num_base_bdevs": 4, 00:11:15.829 "num_base_bdevs_discovered": 3, 00:11:15.829 "num_base_bdevs_operational": 3, 00:11:15.829 "base_bdevs_list": [ 00:11:15.829 { 00:11:15.829 "name": null, 00:11:15.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.829 "is_configured": false, 00:11:15.829 "data_offset": 0, 00:11:15.829 "data_size": 63488 00:11:15.829 }, 00:11:15.829 { 00:11:15.829 "name": "BaseBdev2", 00:11:15.829 "uuid": "bff4eb3b-d13a-4f96-954a-b9d4ec45262f", 00:11:15.829 "is_configured": true, 00:11:15.829 "data_offset": 2048, 00:11:15.829 "data_size": 63488 00:11:15.829 }, 00:11:15.829 { 00:11:15.829 "name": "BaseBdev3", 00:11:15.829 "uuid": "0c136130-af49-42a8-be0d-34914e94f350", 00:11:15.829 "is_configured": true, 00:11:15.829 "data_offset": 2048, 00:11:15.829 "data_size": 63488 00:11:15.829 }, 00:11:15.829 { 00:11:15.829 "name": "BaseBdev4", 00:11:15.829 "uuid": "7d692c64-e509-495d-a1c0-17acba8e9568", 00:11:15.829 "is_configured": true, 00:11:15.829 "data_offset": 2048, 00:11:15.829 "data_size": 63488 00:11:15.829 } 00:11:15.829 ] 00:11:15.829 }' 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.829 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.089 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.089 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.089 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.089 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.089 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
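The dump above shows the array after `BaseBdev1` was deleted: `state` flipped to `offline` because concat carries no redundancy (`has_redundancy` returned 1), and `num_base_bdevs_discovered`/`num_base_bdevs_operational` dropped to 3. A sketch of the field checks `verify_raid_bdev_state` performs, against a hypothetical trimmed copy of that JSON:

```shell
# Hypothetical, trimmed stand-in for the Existed_Raid entry returned by
# `rpc_cmd bdev_raid_get_bdevs all` after one base bdev is removed.
raid_bdev_info='{"name":"Existed_Raid","state":"offline","raid_level":"concat","strip_size_kb":64,"num_base_bdevs_operational":3}'

# concat has no redundancy, so losing a base bdev must drive the array
# offline; the helper extracts and compares exactly these fields.
state=$(printf '%s' "$raid_bdev_info" | jq -r '.state')
ops=$(printf '%s' "$raid_bdev_info" | jq -r '.num_base_bdevs_operational')
[ "$state" = offline ] && [ "$ops" -eq 3 ] && echo "state check passed"
```

For a redundant level (e.g. raid1), the same removal would instead leave the expected state `online` with one fewer operational base bdev, which is the branch `has_redundancy` selects earlier in the trace.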
00:11:16.348 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.348 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.348 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.348 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 [2024-11-21 03:20:03.705578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 
-- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 [2024-11-21 03:20:03.773064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 [2024-11-21 
03:20:03.856477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:16.349 [2024-11-21 03:20:03.856554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.349 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.609 BaseBdev2 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.609 [ 00:11:16.609 { 00:11:16.609 "name": "BaseBdev2", 00:11:16.609 "aliases": [ 00:11:16.609 "f8568ac4-4021-4a76-afa0-72e2eb1b881f" 00:11:16.609 ], 00:11:16.609 "product_name": "Malloc disk", 00:11:16.609 "block_size": 512, 00:11:16.609 "num_blocks": 65536, 00:11:16.609 "uuid": 
"f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:16.609 "assigned_rate_limits": { 00:11:16.609 "rw_ios_per_sec": 0, 00:11:16.609 "rw_mbytes_per_sec": 0, 00:11:16.609 "r_mbytes_per_sec": 0, 00:11:16.609 "w_mbytes_per_sec": 0 00:11:16.609 }, 00:11:16.609 "claimed": false, 00:11:16.609 "zoned": false, 00:11:16.609 "supported_io_types": { 00:11:16.609 "read": true, 00:11:16.609 "write": true, 00:11:16.609 "unmap": true, 00:11:16.609 "flush": true, 00:11:16.609 "reset": true, 00:11:16.609 "nvme_admin": false, 00:11:16.609 "nvme_io": false, 00:11:16.609 "nvme_io_md": false, 00:11:16.609 "write_zeroes": true, 00:11:16.609 "zcopy": true, 00:11:16.609 "get_zone_info": false, 00:11:16.609 "zone_management": false, 00:11:16.609 "zone_append": false, 00:11:16.609 "compare": false, 00:11:16.609 "compare_and_write": false, 00:11:16.609 "abort": true, 00:11:16.609 "seek_hole": false, 00:11:16.609 "seek_data": false, 00:11:16.609 "copy": true, 00:11:16.609 "nvme_iov_md": false 00:11:16.609 }, 00:11:16.609 "memory_domains": [ 00:11:16.609 { 00:11:16.609 "dma_device_id": "system", 00:11:16.609 "dma_device_type": 1 00:11:16.609 }, 00:11:16.609 { 00:11:16.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.609 "dma_device_type": 2 00:11:16.609 } 00:11:16.609 ], 00:11:16.609 "driver_specific": {} 00:11:16.609 } 00:11:16.609 ] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.609 BaseBdev3 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.609 03:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.609 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.609 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.609 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.609 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.609 [ 00:11:16.609 { 00:11:16.609 "name": "BaseBdev3", 00:11:16.609 "aliases": [ 00:11:16.609 "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3" 00:11:16.609 ], 00:11:16.609 "product_name": "Malloc disk", 00:11:16.609 "block_size": 512, 
00:11:16.609 "num_blocks": 65536, 00:11:16.609 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:16.610 "assigned_rate_limits": { 00:11:16.610 "rw_ios_per_sec": 0, 00:11:16.610 "rw_mbytes_per_sec": 0, 00:11:16.610 "r_mbytes_per_sec": 0, 00:11:16.610 "w_mbytes_per_sec": 0 00:11:16.610 }, 00:11:16.610 "claimed": false, 00:11:16.610 "zoned": false, 00:11:16.610 "supported_io_types": { 00:11:16.610 "read": true, 00:11:16.610 "write": true, 00:11:16.610 "unmap": true, 00:11:16.610 "flush": true, 00:11:16.610 "reset": true, 00:11:16.610 "nvme_admin": false, 00:11:16.610 "nvme_io": false, 00:11:16.610 "nvme_io_md": false, 00:11:16.610 "write_zeroes": true, 00:11:16.610 "zcopy": true, 00:11:16.610 "get_zone_info": false, 00:11:16.610 "zone_management": false, 00:11:16.610 "zone_append": false, 00:11:16.610 "compare": false, 00:11:16.610 "compare_and_write": false, 00:11:16.610 "abort": true, 00:11:16.610 "seek_hole": false, 00:11:16.610 "seek_data": false, 00:11:16.610 "copy": true, 00:11:16.610 "nvme_iov_md": false 00:11:16.610 }, 00:11:16.610 "memory_domains": [ 00:11:16.610 { 00:11:16.610 "dma_device_id": "system", 00:11:16.610 "dma_device_type": 1 00:11:16.610 }, 00:11:16.610 { 00:11:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.610 "dma_device_type": 2 00:11:16.610 } 00:11:16.610 ], 00:11:16.610 "driver_specific": {} 00:11:16.610 } 00:11:16.610 ] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.610 03:20:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.610 BaseBdev4 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.610 [ 00:11:16.610 { 00:11:16.610 "name": "BaseBdev4", 00:11:16.610 "aliases": [ 00:11:16.610 "25eb023b-ba5a-4781-b28f-2e9a8a805418" 00:11:16.610 ], 
00:11:16.610 "product_name": "Malloc disk", 00:11:16.610 "block_size": 512, 00:11:16.610 "num_blocks": 65536, 00:11:16.610 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:16.610 "assigned_rate_limits": { 00:11:16.610 "rw_ios_per_sec": 0, 00:11:16.610 "rw_mbytes_per_sec": 0, 00:11:16.610 "r_mbytes_per_sec": 0, 00:11:16.610 "w_mbytes_per_sec": 0 00:11:16.610 }, 00:11:16.610 "claimed": false, 00:11:16.610 "zoned": false, 00:11:16.610 "supported_io_types": { 00:11:16.610 "read": true, 00:11:16.610 "write": true, 00:11:16.610 "unmap": true, 00:11:16.610 "flush": true, 00:11:16.610 "reset": true, 00:11:16.610 "nvme_admin": false, 00:11:16.610 "nvme_io": false, 00:11:16.610 "nvme_io_md": false, 00:11:16.610 "write_zeroes": true, 00:11:16.610 "zcopy": true, 00:11:16.610 "get_zone_info": false, 00:11:16.610 "zone_management": false, 00:11:16.610 "zone_append": false, 00:11:16.610 "compare": false, 00:11:16.610 "compare_and_write": false, 00:11:16.610 "abort": true, 00:11:16.610 "seek_hole": false, 00:11:16.610 "seek_data": false, 00:11:16.610 "copy": true, 00:11:16.610 "nvme_iov_md": false 00:11:16.610 }, 00:11:16.610 "memory_domains": [ 00:11:16.610 { 00:11:16.610 "dma_device_id": "system", 00:11:16.610 "dma_device_type": 1 00:11:16.610 }, 00:11:16.610 { 00:11:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.610 "dma_device_type": 2 00:11:16.610 } 00:11:16.610 ], 00:11:16.610 "driver_specific": {} 00:11:16.610 } 00:11:16.610 ] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.610 [2024-11-21 03:20:04.083299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.610 [2024-11-21 03:20:04.083363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.610 [2024-11-21 03:20:04.083388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.610 [2024-11-21 03:20:04.085369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.610 [2024-11-21 03:20:04.085422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.610 "name": "Existed_Raid", 00:11:16.610 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:16.610 "strip_size_kb": 64, 00:11:16.610 "state": "configuring", 00:11:16.610 "raid_level": "concat", 00:11:16.610 "superblock": true, 00:11:16.610 "num_base_bdevs": 4, 00:11:16.610 "num_base_bdevs_discovered": 3, 00:11:16.610 "num_base_bdevs_operational": 4, 00:11:16.610 "base_bdevs_list": [ 00:11:16.610 { 00:11:16.610 "name": "BaseBdev1", 00:11:16.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.610 "is_configured": false, 00:11:16.610 "data_offset": 0, 00:11:16.610 "data_size": 0 00:11:16.610 }, 00:11:16.610 { 00:11:16.610 "name": "BaseBdev2", 00:11:16.610 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:16.610 "is_configured": true, 00:11:16.610 "data_offset": 2048, 00:11:16.610 "data_size": 63488 00:11:16.610 }, 00:11:16.610 { 00:11:16.610 "name": "BaseBdev3", 00:11:16.610 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:16.610 "is_configured": true, 00:11:16.610 "data_offset": 2048, 
00:11:16.610 "data_size": 63488 00:11:16.610 }, 00:11:16.610 { 00:11:16.610 "name": "BaseBdev4", 00:11:16.610 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:16.610 "is_configured": true, 00:11:16.610 "data_offset": 2048, 00:11:16.610 "data_size": 63488 00:11:16.610 } 00:11:16.610 ] 00:11:16.610 }' 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.610 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.179 [2024-11-21 03:20:04.535381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.179 03:20:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.179 "name": "Existed_Raid", 00:11:17.179 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:17.179 "strip_size_kb": 64, 00:11:17.179 "state": "configuring", 00:11:17.179 "raid_level": "concat", 00:11:17.179 "superblock": true, 00:11:17.179 "num_base_bdevs": 4, 00:11:17.179 "num_base_bdevs_discovered": 2, 00:11:17.179 "num_base_bdevs_operational": 4, 00:11:17.179 "base_bdevs_list": [ 00:11:17.179 { 00:11:17.179 "name": "BaseBdev1", 00:11:17.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.179 "is_configured": false, 00:11:17.179 "data_offset": 0, 00:11:17.179 "data_size": 0 00:11:17.179 }, 00:11:17.179 { 00:11:17.179 "name": null, 00:11:17.179 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:17.179 "is_configured": false, 00:11:17.179 "data_offset": 0, 00:11:17.179 "data_size": 63488 00:11:17.179 }, 00:11:17.179 { 00:11:17.179 "name": "BaseBdev3", 00:11:17.179 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:17.179 "is_configured": true, 
00:11:17.179 "data_offset": 2048, 00:11:17.179 "data_size": 63488 00:11:17.179 }, 00:11:17.179 { 00:11:17.179 "name": "BaseBdev4", 00:11:17.179 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:17.179 "is_configured": true, 00:11:17.179 "data_offset": 2048, 00:11:17.179 "data_size": 63488 00:11:17.179 } 00:11:17.179 ] 00:11:17.179 }' 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.179 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.439 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.439 03:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.439 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.439 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.439 03:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.698 [2024-11-21 03:20:05.030688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.698 BaseBdev1 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:17.698 03:20:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.698 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.698 [ 00:11:17.698 { 00:11:17.698 "name": "BaseBdev1", 00:11:17.698 "aliases": [ 00:11:17.698 "f60f47ca-2714-4cba-a02c-b424896496cf" 00:11:17.698 ], 00:11:17.698 "product_name": "Malloc disk", 00:11:17.698 "block_size": 512, 00:11:17.698 "num_blocks": 65536, 00:11:17.698 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:17.698 "assigned_rate_limits": { 00:11:17.698 "rw_ios_per_sec": 0, 00:11:17.698 "rw_mbytes_per_sec": 0, 00:11:17.698 "r_mbytes_per_sec": 0, 00:11:17.698 "w_mbytes_per_sec": 0 00:11:17.698 }, 00:11:17.698 "claimed": true, 00:11:17.698 "claim_type": "exclusive_write", 00:11:17.698 "zoned": false, 
00:11:17.698 "supported_io_types": { 00:11:17.698 "read": true, 00:11:17.698 "write": true, 00:11:17.698 "unmap": true, 00:11:17.698 "flush": true, 00:11:17.698 "reset": true, 00:11:17.698 "nvme_admin": false, 00:11:17.698 "nvme_io": false, 00:11:17.698 "nvme_io_md": false, 00:11:17.698 "write_zeroes": true, 00:11:17.698 "zcopy": true, 00:11:17.698 "get_zone_info": false, 00:11:17.698 "zone_management": false, 00:11:17.698 "zone_append": false, 00:11:17.698 "compare": false, 00:11:17.698 "compare_and_write": false, 00:11:17.698 "abort": true, 00:11:17.699 "seek_hole": false, 00:11:17.699 "seek_data": false, 00:11:17.699 "copy": true, 00:11:17.699 "nvme_iov_md": false 00:11:17.699 }, 00:11:17.699 "memory_domains": [ 00:11:17.699 { 00:11:17.699 "dma_device_id": "system", 00:11:17.699 "dma_device_type": 1 00:11:17.699 }, 00:11:17.699 { 00:11:17.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.699 "dma_device_type": 2 00:11:17.699 } 00:11:17.699 ], 00:11:17.699 "driver_specific": {} 00:11:17.699 } 00:11:17.699 ] 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.699 "name": "Existed_Raid", 00:11:17.699 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:17.699 "strip_size_kb": 64, 00:11:17.699 "state": "configuring", 00:11:17.699 "raid_level": "concat", 00:11:17.699 "superblock": true, 00:11:17.699 "num_base_bdevs": 4, 00:11:17.699 "num_base_bdevs_discovered": 3, 00:11:17.699 "num_base_bdevs_operational": 4, 00:11:17.699 "base_bdevs_list": [ 00:11:17.699 { 00:11:17.699 "name": "BaseBdev1", 00:11:17.699 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:17.699 "is_configured": true, 00:11:17.699 "data_offset": 2048, 00:11:17.699 "data_size": 63488 00:11:17.699 }, 00:11:17.699 { 00:11:17.699 "name": null, 00:11:17.699 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:17.699 "is_configured": false, 00:11:17.699 "data_offset": 0, 00:11:17.699 "data_size": 63488 00:11:17.699 }, 00:11:17.699 { 
00:11:17.699 "name": "BaseBdev3", 00:11:17.699 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:17.699 "is_configured": true, 00:11:17.699 "data_offset": 2048, 00:11:17.699 "data_size": 63488 00:11:17.699 }, 00:11:17.699 { 00:11:17.699 "name": "BaseBdev4", 00:11:17.699 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:17.699 "is_configured": true, 00:11:17.699 "data_offset": 2048, 00:11:17.699 "data_size": 63488 00:11:17.699 } 00:11:17.699 ] 00:11:17.699 }' 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.699 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.958 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.958 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.959 [2024-11-21 03:20:05.494905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.959 03:20:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.959 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.218 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.218 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.218 "name": "Existed_Raid", 00:11:18.218 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:18.218 "strip_size_kb": 64, 
00:11:18.218 "state": "configuring", 00:11:18.218 "raid_level": "concat", 00:11:18.218 "superblock": true, 00:11:18.218 "num_base_bdevs": 4, 00:11:18.218 "num_base_bdevs_discovered": 2, 00:11:18.218 "num_base_bdevs_operational": 4, 00:11:18.218 "base_bdevs_list": [ 00:11:18.218 { 00:11:18.218 "name": "BaseBdev1", 00:11:18.218 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:18.218 "is_configured": true, 00:11:18.218 "data_offset": 2048, 00:11:18.218 "data_size": 63488 00:11:18.218 }, 00:11:18.218 { 00:11:18.218 "name": null, 00:11:18.218 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:18.218 "is_configured": false, 00:11:18.218 "data_offset": 0, 00:11:18.218 "data_size": 63488 00:11:18.218 }, 00:11:18.218 { 00:11:18.218 "name": null, 00:11:18.218 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:18.218 "is_configured": false, 00:11:18.218 "data_offset": 0, 00:11:18.218 "data_size": 63488 00:11:18.218 }, 00:11:18.218 { 00:11:18.219 "name": "BaseBdev4", 00:11:18.219 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:18.219 "is_configured": true, 00:11:18.219 "data_offset": 2048, 00:11:18.219 "data_size": 63488 00:11:18.219 } 00:11:18.219 ] 00:11:18.219 }' 00:11:18.219 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.219 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.479 03:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.479 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.479 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 03:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.479 
03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 [2024-11-21 03:20:06.011153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.479 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.739 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.739 "name": "Existed_Raid", 00:11:18.739 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:18.739 "strip_size_kb": 64, 00:11:18.739 "state": "configuring", 00:11:18.739 "raid_level": "concat", 00:11:18.739 "superblock": true, 00:11:18.739 "num_base_bdevs": 4, 00:11:18.739 "num_base_bdevs_discovered": 3, 00:11:18.739 "num_base_bdevs_operational": 4, 00:11:18.739 "base_bdevs_list": [ 00:11:18.739 { 00:11:18.739 "name": "BaseBdev1", 00:11:18.739 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:18.739 "is_configured": true, 00:11:18.739 "data_offset": 2048, 00:11:18.739 "data_size": 63488 00:11:18.739 }, 00:11:18.739 { 00:11:18.739 "name": null, 00:11:18.739 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:18.739 "is_configured": false, 00:11:18.739 "data_offset": 0, 00:11:18.739 "data_size": 63488 00:11:18.739 }, 00:11:18.739 { 00:11:18.739 "name": "BaseBdev3", 00:11:18.739 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:18.739 "is_configured": true, 00:11:18.739 "data_offset": 2048, 00:11:18.739 "data_size": 63488 00:11:18.739 }, 00:11:18.739 { 00:11:18.739 "name": "BaseBdev4", 00:11:18.739 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:18.739 "is_configured": true, 00:11:18.739 "data_offset": 2048, 00:11:18.739 "data_size": 63488 00:11:18.739 } 00:11:18.739 ] 00:11:18.739 }' 00:11:18.739 03:20:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.739 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:18.998 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.999 [2024-11-21 03:20:06.507261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.999 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.258 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.258 "name": "Existed_Raid", 00:11:19.258 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:19.258 "strip_size_kb": 64, 00:11:19.258 "state": "configuring", 00:11:19.258 "raid_level": "concat", 00:11:19.258 "superblock": true, 00:11:19.258 "num_base_bdevs": 4, 00:11:19.258 "num_base_bdevs_discovered": 2, 00:11:19.258 "num_base_bdevs_operational": 4, 00:11:19.258 "base_bdevs_list": [ 00:11:19.258 { 00:11:19.258 "name": null, 00:11:19.258 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:19.258 "is_configured": false, 00:11:19.258 "data_offset": 0, 00:11:19.258 "data_size": 63488 00:11:19.258 }, 00:11:19.258 { 00:11:19.258 "name": null, 00:11:19.258 "uuid": 
"f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:19.258 "is_configured": false, 00:11:19.258 "data_offset": 0, 00:11:19.258 "data_size": 63488 00:11:19.258 }, 00:11:19.258 { 00:11:19.258 "name": "BaseBdev3", 00:11:19.258 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:19.258 "is_configured": true, 00:11:19.258 "data_offset": 2048, 00:11:19.258 "data_size": 63488 00:11:19.258 }, 00:11:19.258 { 00:11:19.258 "name": "BaseBdev4", 00:11:19.258 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:19.258 "is_configured": true, 00:11:19.258 "data_offset": 2048, 00:11:19.258 "data_size": 63488 00:11:19.258 } 00:11:19.258 ] 00:11:19.258 }' 00:11:19.259 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.259 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.520 03:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.520 [2024-11-21 03:20:07.005935] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.520 03:20:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.520 "name": "Existed_Raid", 00:11:19.520 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:19.520 "strip_size_kb": 64, 00:11:19.520 "state": "configuring", 00:11:19.520 "raid_level": "concat", 00:11:19.520 "superblock": true, 00:11:19.520 "num_base_bdevs": 4, 00:11:19.520 "num_base_bdevs_discovered": 3, 00:11:19.520 "num_base_bdevs_operational": 4, 00:11:19.520 "base_bdevs_list": [ 00:11:19.520 { 00:11:19.520 "name": null, 00:11:19.520 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:19.520 "is_configured": false, 00:11:19.520 "data_offset": 0, 00:11:19.520 "data_size": 63488 00:11:19.520 }, 00:11:19.520 { 00:11:19.520 "name": "BaseBdev2", 00:11:19.520 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:19.520 "is_configured": true, 00:11:19.520 "data_offset": 2048, 00:11:19.520 "data_size": 63488 00:11:19.520 }, 00:11:19.520 { 00:11:19.520 "name": "BaseBdev3", 00:11:19.520 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:19.520 "is_configured": true, 00:11:19.520 "data_offset": 2048, 00:11:19.520 "data_size": 63488 00:11:19.520 }, 00:11:19.520 { 00:11:19.520 "name": "BaseBdev4", 00:11:19.520 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:19.520 "is_configured": true, 00:11:19.520 "data_offset": 2048, 00:11:19.520 "data_size": 63488 00:11:19.520 } 00:11:19.520 ] 00:11:19.520 }' 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.520 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f60f47ca-2714-4cba-a02c-b424896496cf 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.103 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.103 [2024-11-21 03:20:07.565382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:20.103 [2024-11-21 03:20:07.565578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:20.103 [2024-11-21 03:20:07.565602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.103 [2024-11-21 03:20:07.565866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:20.104 NewBaseBdev 00:11:20.104 [2024-11-21 03:20:07.565990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:20.104 [2024-11-21 03:20:07.566001] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:20.104 [2024-11-21 03:20:07.566121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 [ 00:11:20.104 { 00:11:20.104 "name": "NewBaseBdev", 00:11:20.104 "aliases": [ 00:11:20.104 "f60f47ca-2714-4cba-a02c-b424896496cf" 
00:11:20.104 ], 00:11:20.104 "product_name": "Malloc disk", 00:11:20.104 "block_size": 512, 00:11:20.104 "num_blocks": 65536, 00:11:20.104 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:20.104 "assigned_rate_limits": { 00:11:20.104 "rw_ios_per_sec": 0, 00:11:20.104 "rw_mbytes_per_sec": 0, 00:11:20.104 "r_mbytes_per_sec": 0, 00:11:20.104 "w_mbytes_per_sec": 0 00:11:20.104 }, 00:11:20.104 "claimed": true, 00:11:20.104 "claim_type": "exclusive_write", 00:11:20.104 "zoned": false, 00:11:20.104 "supported_io_types": { 00:11:20.104 "read": true, 00:11:20.104 "write": true, 00:11:20.104 "unmap": true, 00:11:20.104 "flush": true, 00:11:20.104 "reset": true, 00:11:20.104 "nvme_admin": false, 00:11:20.104 "nvme_io": false, 00:11:20.104 "nvme_io_md": false, 00:11:20.104 "write_zeroes": true, 00:11:20.104 "zcopy": true, 00:11:20.104 "get_zone_info": false, 00:11:20.104 "zone_management": false, 00:11:20.104 "zone_append": false, 00:11:20.104 "compare": false, 00:11:20.104 "compare_and_write": false, 00:11:20.104 "abort": true, 00:11:20.104 "seek_hole": false, 00:11:20.104 "seek_data": false, 00:11:20.104 "copy": true, 00:11:20.104 "nvme_iov_md": false 00:11:20.104 }, 00:11:20.104 "memory_domains": [ 00:11:20.104 { 00:11:20.104 "dma_device_id": "system", 00:11:20.104 "dma_device_type": 1 00:11:20.104 }, 00:11:20.104 { 00:11:20.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.104 "dma_device_type": 2 00:11:20.104 } 00:11:20.104 ], 00:11:20.104 "driver_specific": {} 00:11:20.104 } 00:11:20.104 ] 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.104 "name": "Existed_Raid", 00:11:20.104 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:20.104 "strip_size_kb": 64, 00:11:20.104 "state": "online", 00:11:20.104 "raid_level": "concat", 00:11:20.104 "superblock": true, 00:11:20.104 "num_base_bdevs": 4, 00:11:20.104 "num_base_bdevs_discovered": 4, 00:11:20.104 "num_base_bdevs_operational": 4, 
00:11:20.104 "base_bdevs_list": [ 00:11:20.104 { 00:11:20.104 "name": "NewBaseBdev", 00:11:20.104 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:20.104 "is_configured": true, 00:11:20.104 "data_offset": 2048, 00:11:20.104 "data_size": 63488 00:11:20.104 }, 00:11:20.104 { 00:11:20.104 "name": "BaseBdev2", 00:11:20.104 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:20.104 "is_configured": true, 00:11:20.104 "data_offset": 2048, 00:11:20.104 "data_size": 63488 00:11:20.104 }, 00:11:20.104 { 00:11:20.104 "name": "BaseBdev3", 00:11:20.104 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:20.104 "is_configured": true, 00:11:20.104 "data_offset": 2048, 00:11:20.104 "data_size": 63488 00:11:20.104 }, 00:11:20.104 { 00:11:20.104 "name": "BaseBdev4", 00:11:20.104 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:20.104 "is_configured": true, 00:11:20.104 "data_offset": 2048, 00:11:20.104 "data_size": 63488 00:11:20.104 } 00:11:20.104 ] 00:11:20.104 }' 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.104 03:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.673 [2024-11-21 03:20:08.041965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.673 "name": "Existed_Raid", 00:11:20.673 "aliases": [ 00:11:20.673 "2172749a-dd8c-4f74-9866-c012fd8a7554" 00:11:20.673 ], 00:11:20.673 "product_name": "Raid Volume", 00:11:20.673 "block_size": 512, 00:11:20.673 "num_blocks": 253952, 00:11:20.673 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:20.673 "assigned_rate_limits": { 00:11:20.673 "rw_ios_per_sec": 0, 00:11:20.673 "rw_mbytes_per_sec": 0, 00:11:20.673 "r_mbytes_per_sec": 0, 00:11:20.673 "w_mbytes_per_sec": 0 00:11:20.673 }, 00:11:20.673 "claimed": false, 00:11:20.673 "zoned": false, 00:11:20.673 "supported_io_types": { 00:11:20.673 "read": true, 00:11:20.673 "write": true, 00:11:20.673 "unmap": true, 00:11:20.673 "flush": true, 00:11:20.673 "reset": true, 00:11:20.673 "nvme_admin": false, 00:11:20.673 "nvme_io": false, 00:11:20.673 "nvme_io_md": false, 00:11:20.673 "write_zeroes": true, 00:11:20.673 "zcopy": false, 00:11:20.673 "get_zone_info": false, 00:11:20.673 "zone_management": false, 00:11:20.673 "zone_append": false, 00:11:20.673 "compare": false, 00:11:20.673 "compare_and_write": false, 00:11:20.673 "abort": false, 00:11:20.673 "seek_hole": false, 00:11:20.673 "seek_data": false, 00:11:20.673 "copy": false, 00:11:20.673 "nvme_iov_md": false 00:11:20.673 }, 00:11:20.673 "memory_domains": [ 00:11:20.673 { 
00:11:20.673 "dma_device_id": "system", 00:11:20.673 "dma_device_type": 1 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.673 "dma_device_type": 2 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "system", 00:11:20.673 "dma_device_type": 1 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.673 "dma_device_type": 2 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "system", 00:11:20.673 "dma_device_type": 1 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.673 "dma_device_type": 2 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "system", 00:11:20.673 "dma_device_type": 1 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.673 "dma_device_type": 2 00:11:20.673 } 00:11:20.673 ], 00:11:20.673 "driver_specific": { 00:11:20.673 "raid": { 00:11:20.673 "uuid": "2172749a-dd8c-4f74-9866-c012fd8a7554", 00:11:20.673 "strip_size_kb": 64, 00:11:20.673 "state": "online", 00:11:20.673 "raid_level": "concat", 00:11:20.673 "superblock": true, 00:11:20.673 "num_base_bdevs": 4, 00:11:20.673 "num_base_bdevs_discovered": 4, 00:11:20.673 "num_base_bdevs_operational": 4, 00:11:20.673 "base_bdevs_list": [ 00:11:20.673 { 00:11:20.673 "name": "NewBaseBdev", 00:11:20.673 "uuid": "f60f47ca-2714-4cba-a02c-b424896496cf", 00:11:20.673 "is_configured": true, 00:11:20.673 "data_offset": 2048, 00:11:20.673 "data_size": 63488 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "name": "BaseBdev2", 00:11:20.673 "uuid": "f8568ac4-4021-4a76-afa0-72e2eb1b881f", 00:11:20.673 "is_configured": true, 00:11:20.673 "data_offset": 2048, 00:11:20.673 "data_size": 63488 00:11:20.673 }, 00:11:20.673 { 00:11:20.673 "name": "BaseBdev3", 00:11:20.673 "uuid": "ea159dd9-4b12-4e3e-9f0a-3c6a51a1c8f3", 00:11:20.673 "is_configured": true, 00:11:20.673 "data_offset": 2048, 00:11:20.673 "data_size": 63488 00:11:20.673 }, 
00:11:20.673 { 00:11:20.673 "name": "BaseBdev4", 00:11:20.673 "uuid": "25eb023b-ba5a-4781-b28f-2e9a8a805418", 00:11:20.673 "is_configured": true, 00:11:20.673 "data_offset": 2048, 00:11:20.673 "data_size": 63488 00:11:20.673 } 00:11:20.673 ] 00:11:20.673 } 00:11:20.673 } 00:11:20.673 }' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:20.673 BaseBdev2 00:11:20.673 BaseBdev3 00:11:20.673 BaseBdev4' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.673 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.932 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.933 03:20:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 [2024-11-21 03:20:08.329657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.933 [2024-11-21 03:20:08.329699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.933 [2024-11-21 03:20:08.329788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.933 [2024-11-21 03:20:08.329856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.933 [2024-11-21 03:20:08.329889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 03:20:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84867 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84867 ']' 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84867 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84867 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.933 killing process with pid 84867 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84867' 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84867 00:11:20.933 [2024-11-21 03:20:08.376182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.933 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84867 00:11:20.933 [2024-11-21 03:20:08.418198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.192 03:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:21.192 00:11:21.192 real 0m9.754s 00:11:21.192 user 0m16.617s 00:11:21.192 sys 0m2.107s 00:11:21.192 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.192 03:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.192 ************************************ 00:11:21.192 END 
TEST raid_state_function_test_sb 00:11:21.192 ************************************ 00:11:21.192 03:20:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:21.192 03:20:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.192 03:20:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.192 03:20:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.192 ************************************ 00:11:21.192 START TEST raid_superblock_test 00:11:21.192 ************************************ 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 
00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85521 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85521 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85521 ']' 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.192 03:20:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 [2024-11-21 03:20:08.807190] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:11:21.452 [2024-11-21 03:20:08.807328] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85521 ] 00:11:21.452 [2024-11-21 03:20:08.946476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:21.452 [2024-11-21 03:20:08.984516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.452 [2024-11-21 03:20:09.015050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.712 [2024-11-21 03:20:09.058974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.712 [2024-11-21 03:20:09.059055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.280 malloc1 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.280 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 [2024-11-21 03:20:09.699339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:22.281 [2024-11-21 03:20:09.699415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.281 [2024-11-21 03:20:09.699446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:22.281 [2024-11-21 03:20:09.699459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.281 [2024-11-21 03:20:09.701670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.281 [2024-11-21 03:20:09.701708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:22.281 pt1 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 malloc2 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 [2024-11-21 03:20:09.728255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:22.281 [2024-11-21 03:20:09.728320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.281 [2024-11-21 03:20:09.728342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:22.281 [2024-11-21 03:20:09.728352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.281 [2024-11-21 03:20:09.730469] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.281 [2024-11-21 03:20:09.730507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:22.281 pt2 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 malloc3 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 [2024-11-21 03:20:09.757219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:22.281 [2024-11-21 03:20:09.757279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.281 [2024-11-21 03:20:09.757302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:22.281 [2024-11-21 03:20:09.757324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.281 [2024-11-21 03:20:09.759393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.281 [2024-11-21 03:20:09.759428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:22.281 pt3 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 malloc4 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 [2024-11-21 03:20:09.795598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:22.281 [2024-11-21 03:20:09.795664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.281 [2024-11-21 03:20:09.795691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:22.281 [2024-11-21 03:20:09.795703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.281 [2024-11-21 03:20:09.797973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.281 [2024-11-21 03:20:09.798012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:22.281 pt4 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.281 03:20:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 [2024-11-21 03:20:09.807662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:22.281 [2024-11-21 03:20:09.809528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.281 [2024-11-21 03:20:09.809605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:22.281 [2024-11-21 03:20:09.809668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:22.281 [2024-11-21 03:20:09.809831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:22.281 [2024-11-21 03:20:09.809848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:22.281 [2024-11-21 03:20:09.810134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:22.281 [2024-11-21 03:20:09.810297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:22.281 [2024-11-21 03:20:09.810315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:22.281 [2024-11-21 03:20:09.810446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.281 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.282 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.542 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.542 "name": "raid_bdev1", 00:11:22.542 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:22.542 "strip_size_kb": 64, 00:11:22.542 "state": "online", 00:11:22.542 "raid_level": "concat", 00:11:22.542 "superblock": true, 00:11:22.542 "num_base_bdevs": 4, 00:11:22.542 "num_base_bdevs_discovered": 4, 00:11:22.542 "num_base_bdevs_operational": 4, 00:11:22.542 "base_bdevs_list": [ 00:11:22.542 { 00:11:22.542 "name": "pt1", 00:11:22.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.542 "is_configured": true, 00:11:22.542 "data_offset": 2048, 00:11:22.542 "data_size": 63488 00:11:22.542 }, 00:11:22.542 { 00:11:22.542 "name": "pt2", 00:11:22.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.542 "is_configured": true, 00:11:22.542 "data_offset": 2048, 00:11:22.542 
"data_size": 63488 00:11:22.542 }, 00:11:22.542 { 00:11:22.542 "name": "pt3", 00:11:22.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.542 "is_configured": true, 00:11:22.542 "data_offset": 2048, 00:11:22.542 "data_size": 63488 00:11:22.542 }, 00:11:22.542 { 00:11:22.542 "name": "pt4", 00:11:22.542 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.542 "is_configured": true, 00:11:22.542 "data_offset": 2048, 00:11:22.542 "data_size": 63488 00:11:22.542 } 00:11:22.542 ] 00:11:22.542 }' 00:11:22.542 03:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.542 03:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.801 [2024-11-21 03:20:10.264197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:22.801 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.801 "name": "raid_bdev1", 00:11:22.801 "aliases": [ 00:11:22.801 "8135e097-597a-4964-8f34-b0226d1672d6" 00:11:22.801 ], 00:11:22.801 "product_name": "Raid Volume", 00:11:22.801 "block_size": 512, 00:11:22.801 "num_blocks": 253952, 00:11:22.801 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:22.801 "assigned_rate_limits": { 00:11:22.801 "rw_ios_per_sec": 0, 00:11:22.801 "rw_mbytes_per_sec": 0, 00:11:22.801 "r_mbytes_per_sec": 0, 00:11:22.801 "w_mbytes_per_sec": 0 00:11:22.801 }, 00:11:22.801 "claimed": false, 00:11:22.801 "zoned": false, 00:11:22.801 "supported_io_types": { 00:11:22.802 "read": true, 00:11:22.802 "write": true, 00:11:22.802 "unmap": true, 00:11:22.802 "flush": true, 00:11:22.802 "reset": true, 00:11:22.802 "nvme_admin": false, 00:11:22.802 "nvme_io": false, 00:11:22.802 "nvme_io_md": false, 00:11:22.802 "write_zeroes": true, 00:11:22.802 "zcopy": false, 00:11:22.802 "get_zone_info": false, 00:11:22.802 "zone_management": false, 00:11:22.802 "zone_append": false, 00:11:22.802 "compare": false, 00:11:22.802 "compare_and_write": false, 00:11:22.802 "abort": false, 00:11:22.802 "seek_hole": false, 00:11:22.802 "seek_data": false, 00:11:22.802 "copy": false, 00:11:22.802 "nvme_iov_md": false 00:11:22.802 }, 00:11:22.802 "memory_domains": [ 00:11:22.802 { 00:11:22.802 "dma_device_id": "system", 00:11:22.802 "dma_device_type": 1 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.802 "dma_device_type": 2 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": "system", 00:11:22.802 "dma_device_type": 1 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.802 "dma_device_type": 2 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": "system", 00:11:22.802 "dma_device_type": 1 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:22.802 "dma_device_type": 2 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": "system", 00:11:22.802 "dma_device_type": 1 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.802 "dma_device_type": 2 00:11:22.802 } 00:11:22.802 ], 00:11:22.802 "driver_specific": { 00:11:22.802 "raid": { 00:11:22.802 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:22.802 "strip_size_kb": 64, 00:11:22.802 "state": "online", 00:11:22.802 "raid_level": "concat", 00:11:22.802 "superblock": true, 00:11:22.802 "num_base_bdevs": 4, 00:11:22.802 "num_base_bdevs_discovered": 4, 00:11:22.802 "num_base_bdevs_operational": 4, 00:11:22.802 "base_bdevs_list": [ 00:11:22.802 { 00:11:22.802 "name": "pt1", 00:11:22.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.802 "is_configured": true, 00:11:22.802 "data_offset": 2048, 00:11:22.802 "data_size": 63488 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "name": "pt2", 00:11:22.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.802 "is_configured": true, 00:11:22.802 "data_offset": 2048, 00:11:22.802 "data_size": 63488 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "name": "pt3", 00:11:22.802 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.802 "is_configured": true, 00:11:22.802 "data_offset": 2048, 00:11:22.802 "data_size": 63488 00:11:22.802 }, 00:11:22.802 { 00:11:22.802 "name": "pt4", 00:11:22.802 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.802 "is_configured": true, 00:11:22.802 "data_offset": 2048, 00:11:22.802 "data_size": 63488 00:11:22.802 } 00:11:22.802 ] 00:11:22.802 } 00:11:22.802 } 00:11:22.802 }' 00:11:22.802 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.802 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:22.802 pt2 00:11:22.802 pt3 00:11:22.802 
pt4' 00:11:22.802 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.062 03:20:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 [2024-11-21 03:20:10.580199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8135e097-597a-4964-8f34-b0226d1672d6 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8135e097-597a-4964-8f34-b0226d1672d6 ']' 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.062 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 [2024-11-21 03:20:10.627876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.323 [2024-11-21 03:20:10.627922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.323 [2024-11-21 03:20:10.628055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.323 [2024-11-21 03:20:10.628152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.323 [2024-11-21 03:20:10.628175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:23.323 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.324 [2024-11-21 03:20:10.791991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:23.324 [2024-11-21 03:20:10.794007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:23.324 [2024-11-21 03:20:10.794077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:23.324 [2024-11-21 03:20:10.794111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:23.324 [2024-11-21 03:20:10.794161] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:23.324 [2024-11-21 03:20:10.794210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:23.324 [2024-11-21 03:20:10.794229] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:23.324 [2024-11-21 03:20:10.794247] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:23.324 [2024-11-21 
03:20:10.794260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.324 [2024-11-21 03:20:10.794272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:11:23.324 request: 00:11:23.324 { 00:11:23.324 "name": "raid_bdev1", 00:11:23.324 "raid_level": "concat", 00:11:23.324 "base_bdevs": [ 00:11:23.324 "malloc1", 00:11:23.324 "malloc2", 00:11:23.324 "malloc3", 00:11:23.324 "malloc4" 00:11:23.324 ], 00:11:23.324 "strip_size_kb": 64, 00:11:23.324 "superblock": false, 00:11:23.324 "method": "bdev_raid_create", 00:11:23.324 "req_id": 1 00:11:23.324 } 00:11:23.324 Got JSON-RPC error response 00:11:23.324 response: 00:11:23.324 { 00:11:23.324 "code": -17, 00:11:23.324 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:23.324 } 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:23.324 03:20:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.324 [2024-11-21 03:20:10.855959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:23.324 [2024-11-21 03:20:10.856054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.324 [2024-11-21 03:20:10.856076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:23.324 [2024-11-21 03:20:10.856089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.324 [2024-11-21 03:20:10.858427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.324 [2024-11-21 03:20:10.858474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:23.324 [2024-11-21 03:20:10.858558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:23.324 [2024-11-21 03:20:10.858618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:23.324 pt1 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.324 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.583 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.583 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.583 "name": "raid_bdev1", 00:11:23.583 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:23.583 "strip_size_kb": 64, 00:11:23.583 "state": "configuring", 00:11:23.583 "raid_level": "concat", 00:11:23.583 "superblock": true, 00:11:23.583 "num_base_bdevs": 4, 00:11:23.583 "num_base_bdevs_discovered": 1, 00:11:23.583 "num_base_bdevs_operational": 4, 00:11:23.583 "base_bdevs_list": [ 00:11:23.583 { 00:11:23.583 "name": "pt1", 00:11:23.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.583 "is_configured": true, 00:11:23.583 "data_offset": 2048, 00:11:23.583 "data_size": 63488 00:11:23.583 }, 00:11:23.583 { 00:11:23.583 "name": null, 00:11:23.583 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:23.583 "is_configured": false, 00:11:23.583 "data_offset": 2048, 00:11:23.583 "data_size": 63488 00:11:23.583 }, 00:11:23.583 { 00:11:23.583 "name": null, 00:11:23.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.583 "is_configured": false, 00:11:23.583 "data_offset": 2048, 00:11:23.584 "data_size": 63488 00:11:23.584 }, 00:11:23.584 { 00:11:23.584 "name": null, 00:11:23.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.584 "is_configured": false, 00:11:23.584 "data_offset": 2048, 00:11:23.584 "data_size": 63488 00:11:23.584 } 00:11:23.584 ] 00:11:23.584 }' 00:11:23.584 03:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.584 03:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.843 [2024-11-21 03:20:11.304107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:23.843 [2024-11-21 03:20:11.304190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.843 [2024-11-21 03:20:11.304213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:23.843 [2024-11-21 03:20:11.304226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.843 [2024-11-21 03:20:11.304662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.843 [2024-11-21 03:20:11.304691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:11:23.843 [2024-11-21 03:20:11.304765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:23.843 [2024-11-21 03:20:11.304794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:23.843 pt2 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.843 [2024-11-21 03:20:11.316071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.843 "name": "raid_bdev1", 00:11:23.843 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:23.843 "strip_size_kb": 64, 00:11:23.843 "state": "configuring", 00:11:23.843 "raid_level": "concat", 00:11:23.843 "superblock": true, 00:11:23.843 "num_base_bdevs": 4, 00:11:23.843 "num_base_bdevs_discovered": 1, 00:11:23.843 "num_base_bdevs_operational": 4, 00:11:23.843 "base_bdevs_list": [ 00:11:23.843 { 00:11:23.843 "name": "pt1", 00:11:23.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.843 "is_configured": true, 00:11:23.843 "data_offset": 2048, 00:11:23.843 "data_size": 63488 00:11:23.843 }, 00:11:23.843 { 00:11:23.843 "name": null, 00:11:23.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.843 "is_configured": false, 00:11:23.843 "data_offset": 0, 00:11:23.843 "data_size": 63488 00:11:23.843 }, 00:11:23.843 { 00:11:23.843 "name": null, 00:11:23.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.843 "is_configured": false, 00:11:23.843 "data_offset": 2048, 00:11:23.843 "data_size": 63488 00:11:23.843 }, 00:11:23.843 { 00:11:23.843 "name": null, 00:11:23.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.843 "is_configured": false, 00:11:23.843 "data_offset": 2048, 00:11:23.843 "data_size": 63488 00:11:23.843 } 00:11:23.843 ] 00:11:23.843 }' 
00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.843 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.412 [2024-11-21 03:20:11.772231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.412 [2024-11-21 03:20:11.772312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.412 [2024-11-21 03:20:11.772334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:24.412 [2024-11-21 03:20:11.772344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.412 [2024-11-21 03:20:11.772766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.412 [2024-11-21 03:20:11.772794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.412 [2024-11-21 03:20:11.772874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:24.412 [2024-11-21 03:20:11.772896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.412 pt2 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:24.412 03:20:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.412 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.412 [2024-11-21 03:20:11.780203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.412 [2024-11-21 03:20:11.780272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.412 [2024-11-21 03:20:11.780291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:24.412 [2024-11-21 03:20:11.780300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.412 [2024-11-21 03:20:11.780666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.412 [2024-11-21 03:20:11.780693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.413 [2024-11-21 03:20:11.780759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:24.413 [2024-11-21 03:20:11.780777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.413 pt3 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 [2024-11-21 03:20:11.792199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:24.413 [2024-11-21 03:20:11.792257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.413 [2024-11-21 03:20:11.792279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:24.413 [2024-11-21 03:20:11.792288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.413 [2024-11-21 03:20:11.792635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.413 [2024-11-21 03:20:11.792658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:24.413 [2024-11-21 03:20:11.792726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:24.413 [2024-11-21 03:20:11.792743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:24.413 [2024-11-21 03:20:11.792858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:24.413 [2024-11-21 03:20:11.792869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.413 [2024-11-21 03:20:11.793101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:24.413 [2024-11-21 03:20:11.793231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:24.413 [2024-11-21 03:20:11.793250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:24.413 [2024-11-21 03:20:11.793345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.413 pt4 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.413 "name": 
"raid_bdev1", 00:11:24.413 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:24.413 "strip_size_kb": 64, 00:11:24.413 "state": "online", 00:11:24.413 "raid_level": "concat", 00:11:24.413 "superblock": true, 00:11:24.413 "num_base_bdevs": 4, 00:11:24.413 "num_base_bdevs_discovered": 4, 00:11:24.413 "num_base_bdevs_operational": 4, 00:11:24.413 "base_bdevs_list": [ 00:11:24.413 { 00:11:24.413 "name": "pt1", 00:11:24.413 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.413 "is_configured": true, 00:11:24.413 "data_offset": 2048, 00:11:24.413 "data_size": 63488 00:11:24.413 }, 00:11:24.413 { 00:11:24.413 "name": "pt2", 00:11:24.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.413 "is_configured": true, 00:11:24.413 "data_offset": 2048, 00:11:24.413 "data_size": 63488 00:11:24.413 }, 00:11:24.413 { 00:11:24.413 "name": "pt3", 00:11:24.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.413 "is_configured": true, 00:11:24.413 "data_offset": 2048, 00:11:24.413 "data_size": 63488 00:11:24.413 }, 00:11:24.413 { 00:11:24.413 "name": "pt4", 00:11:24.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.413 "is_configured": true, 00:11:24.413 "data_offset": 2048, 00:11:24.413 "data_size": 63488 00:11:24.413 } 00:11:24.413 ] 00:11:24.413 }' 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.413 03:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.673 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.932 [2024-11-21 03:20:12.240715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.932 "name": "raid_bdev1", 00:11:24.932 "aliases": [ 00:11:24.932 "8135e097-597a-4964-8f34-b0226d1672d6" 00:11:24.932 ], 00:11:24.932 "product_name": "Raid Volume", 00:11:24.932 "block_size": 512, 00:11:24.932 "num_blocks": 253952, 00:11:24.932 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:24.932 "assigned_rate_limits": { 00:11:24.932 "rw_ios_per_sec": 0, 00:11:24.932 "rw_mbytes_per_sec": 0, 00:11:24.932 "r_mbytes_per_sec": 0, 00:11:24.932 "w_mbytes_per_sec": 0 00:11:24.932 }, 00:11:24.932 "claimed": false, 00:11:24.932 "zoned": false, 00:11:24.932 "supported_io_types": { 00:11:24.932 "read": true, 00:11:24.932 "write": true, 00:11:24.932 "unmap": true, 00:11:24.932 "flush": true, 00:11:24.932 "reset": true, 00:11:24.932 "nvme_admin": false, 00:11:24.932 "nvme_io": false, 00:11:24.932 "nvme_io_md": false, 00:11:24.932 "write_zeroes": true, 00:11:24.932 "zcopy": false, 00:11:24.932 "get_zone_info": false, 00:11:24.932 "zone_management": false, 00:11:24.932 "zone_append": false, 00:11:24.932 "compare": false, 00:11:24.932 "compare_and_write": false, 00:11:24.932 "abort": 
false, 00:11:24.932 "seek_hole": false, 00:11:24.932 "seek_data": false, 00:11:24.932 "copy": false, 00:11:24.932 "nvme_iov_md": false 00:11:24.932 }, 00:11:24.932 "memory_domains": [ 00:11:24.932 { 00:11:24.932 "dma_device_id": "system", 00:11:24.932 "dma_device_type": 1 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.932 "dma_device_type": 2 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "system", 00:11:24.932 "dma_device_type": 1 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.932 "dma_device_type": 2 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "system", 00:11:24.932 "dma_device_type": 1 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.932 "dma_device_type": 2 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "system", 00:11:24.932 "dma_device_type": 1 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.932 "dma_device_type": 2 00:11:24.932 } 00:11:24.932 ], 00:11:24.932 "driver_specific": { 00:11:24.932 "raid": { 00:11:24.932 "uuid": "8135e097-597a-4964-8f34-b0226d1672d6", 00:11:24.932 "strip_size_kb": 64, 00:11:24.932 "state": "online", 00:11:24.932 "raid_level": "concat", 00:11:24.932 "superblock": true, 00:11:24.932 "num_base_bdevs": 4, 00:11:24.932 "num_base_bdevs_discovered": 4, 00:11:24.932 "num_base_bdevs_operational": 4, 00:11:24.932 "base_bdevs_list": [ 00:11:24.932 { 00:11:24.932 "name": "pt1", 00:11:24.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.932 "is_configured": true, 00:11:24.932 "data_offset": 2048, 00:11:24.932 "data_size": 63488 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "name": "pt2", 00:11:24.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.932 "is_configured": true, 00:11:24.932 "data_offset": 2048, 00:11:24.932 "data_size": 63488 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "name": "pt3", 
00:11:24.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.932 "is_configured": true, 00:11:24.932 "data_offset": 2048, 00:11:24.932 "data_size": 63488 00:11:24.932 }, 00:11:24.932 { 00:11:24.932 "name": "pt4", 00:11:24.932 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.932 "is_configured": true, 00:11:24.932 "data_offset": 2048, 00:11:24.932 "data_size": 63488 00:11:24.932 } 00:11:24.932 ] 00:11:24.932 } 00:11:24.932 } 00:11:24.932 }' 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:24.932 pt2 00:11:24.932 pt3 00:11:24.932 pt4' 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.932 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.933 03:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.933 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt4 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.191 [2024-11-21 03:20:12.556834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8135e097-597a-4964-8f34-b0226d1672d6 '!=' 8135e097-597a-4964-8f34-b0226d1672d6 ']' 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85521 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 
-- # '[' -z 85521 ']' 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85521 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85521 00:11:25.191 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.192 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.192 killing process with pid 85521 00:11:25.192 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85521' 00:11:25.192 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85521 00:11:25.192 [2024-11-21 03:20:12.638792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.192 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85521 00:11:25.192 [2024-11-21 03:20:12.638889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.192 [2024-11-21 03:20:12.639001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.192 [2024-11-21 03:20:12.639029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:25.192 [2024-11-21 03:20:12.684623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.449 03:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:25.449 00:11:25.449 real 0m4.193s 00:11:25.449 user 0m6.585s 00:11:25.449 sys 0m0.981s 00:11:25.449 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:25.449 03:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.449 ************************************ 00:11:25.449 END TEST raid_superblock_test 00:11:25.449 ************************************ 00:11:25.449 03:20:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:25.449 03:20:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.449 03:20:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.449 03:20:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.449 ************************************ 00:11:25.449 START TEST raid_read_error_test 00:11:25.449 ************************************ 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rJQZ9Oycn5 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=85769 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85769 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85769 ']' 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.449 03:20:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.708 [2024-11-21 03:20:13.080977] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:25.708 [2024-11-21 03:20:13.081107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85769 ] 00:11:25.708 [2024-11-21 03:20:13.217621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:25.708 [2024-11-21 03:20:13.239732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.708 [2024-11-21 03:20:13.269865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.966 [2024-11-21 03:20:13.313550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.966 [2024-11-21 03:20:13.313591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 BaseBdev1_malloc 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 true 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.535 [2024-11-21 03:20:13.957624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:26.535 [2024-11-21 03:20:13.957696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.535 [2024-11-21 03:20:13.957717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:26.535 [2024-11-21 03:20:13.957732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.535 [2024-11-21 03:20:13.959932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.535 [2024-11-21 03:20:13.959976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:26.535 BaseBdev1 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 BaseBdev2_malloc 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 true 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 [2024-11-21 03:20:13.998590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:26.535 [2024-11-21 03:20:13.998655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.535 [2024-11-21 03:20:13.998674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:26.535 [2024-11-21 03:20:13.998687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.535 [2024-11-21 03:20:14.001012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.535 [2024-11-21 03:20:14.001070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:26.535 BaseBdev2 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 BaseBdev3_malloc 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 true 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.535 [2024-11-21 03:20:14.039617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:26.535 [2024-11-21 03:20:14.039681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.535 [2024-11-21 03:20:14.039701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:26.535 [2024-11-21 03:20:14.039713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.535 [2024-11-21 03:20:14.041858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.535 [2024-11-21 03:20:14.041899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:26.535 BaseBdev3 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.535 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.536 BaseBdev4_malloc 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.536 true 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.536 [2024-11-21 03:20:14.091798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:26.536 [2024-11-21 03:20:14.091863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.536 [2024-11-21 03:20:14.091884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.536 [2024-11-21 03:20:14.091894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.536 [2024-11-21 03:20:14.094101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.536 [2024-11-21 03:20:14.094146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:26.536 BaseBdev4 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:26.536 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.795 [2024-11-21 03:20:14.103855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.795 [2024-11-21 03:20:14.105759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.795 [2024-11-21 03:20:14.105838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.795 [2024-11-21 03:20:14.105893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.795 [2024-11-21 03:20:14.106128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:26.795 [2024-11-21 03:20:14.106150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:26.795 [2024-11-21 03:20:14.106432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:26.795 [2024-11-21 03:20:14.106587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:26.795 [2024-11-21 03:20:14.106603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:26.795 [2024-11-21 03:20:14.106752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.795 03:20:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.795 "name": "raid_bdev1", 00:11:26.795 "uuid": "cee7904c-a09a-4d0e-bb02-f8cc96880293", 00:11:26.795 "strip_size_kb": 64, 00:11:26.795 "state": "online", 00:11:26.795 "raid_level": "concat", 00:11:26.795 "superblock": true, 00:11:26.795 "num_base_bdevs": 4, 00:11:26.795 "num_base_bdevs_discovered": 4, 00:11:26.795 "num_base_bdevs_operational": 4, 00:11:26.795 "base_bdevs_list": [ 00:11:26.795 { 00:11:26.795 "name": "BaseBdev1", 00:11:26.795 "uuid": "0fc56727-9e86-5dcf-a86a-a760d0b080cf", 00:11:26.795 "is_configured": true, 00:11:26.795 "data_offset": 2048, 00:11:26.795 "data_size": 63488 00:11:26.795 }, 00:11:26.795 { 00:11:26.795 "name": "BaseBdev2", 00:11:26.795 "uuid": "370e2c9f-3ffe-511c-a347-74ebe2cd78ca", 
00:11:26.795 "is_configured": true, 00:11:26.795 "data_offset": 2048, 00:11:26.795 "data_size": 63488 00:11:26.795 }, 00:11:26.795 { 00:11:26.795 "name": "BaseBdev3", 00:11:26.795 "uuid": "dd716f44-bc05-5e34-b3a4-110f54b791fc", 00:11:26.795 "is_configured": true, 00:11:26.795 "data_offset": 2048, 00:11:26.795 "data_size": 63488 00:11:26.795 }, 00:11:26.795 { 00:11:26.795 "name": "BaseBdev4", 00:11:26.795 "uuid": "1fbc5a41-2871-5775-b1c6-6619b1d12ab4", 00:11:26.795 "is_configured": true, 00:11:26.795 "data_offset": 2048, 00:11:26.795 "data_size": 63488 00:11:26.795 } 00:11:26.795 ] 00:11:26.795 }' 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.795 03:20:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.054 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:27.054 03:20:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:27.313 [2024-11-21 03:20:14.644429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:28.256 03:20:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.256 "name": "raid_bdev1", 00:11:28.256 "uuid": "cee7904c-a09a-4d0e-bb02-f8cc96880293", 00:11:28.256 "strip_size_kb": 64, 00:11:28.256 "state": "online", 00:11:28.256 "raid_level": "concat", 00:11:28.256 "superblock": true, 00:11:28.256 "num_base_bdevs": 4, 
00:11:28.256 "num_base_bdevs_discovered": 4, 00:11:28.256 "num_base_bdevs_operational": 4, 00:11:28.256 "base_bdevs_list": [ 00:11:28.256 { 00:11:28.256 "name": "BaseBdev1", 00:11:28.256 "uuid": "0fc56727-9e86-5dcf-a86a-a760d0b080cf", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 00:11:28.256 }, 00:11:28.256 { 00:11:28.256 "name": "BaseBdev2", 00:11:28.256 "uuid": "370e2c9f-3ffe-511c-a347-74ebe2cd78ca", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 00:11:28.256 }, 00:11:28.256 { 00:11:28.256 "name": "BaseBdev3", 00:11:28.256 "uuid": "dd716f44-bc05-5e34-b3a4-110f54b791fc", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 00:11:28.256 }, 00:11:28.256 { 00:11:28.256 "name": "BaseBdev4", 00:11:28.256 "uuid": "1fbc5a41-2871-5775-b1c6-6619b1d12ab4", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 00:11:28.256 } 00:11:28.256 ] 00:11:28.256 }' 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.256 03:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.516 [2024-11-21 03:20:16.027500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.516 [2024-11-21 03:20:16.027551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.516 [2024-11-21 03:20:16.030210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.516 [2024-11-21 03:20:16.030269] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.516 [2024-11-21 03:20:16.030312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.516 [2024-11-21 03:20:16.030326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:28.516 { 00:11:28.516 "results": [ 00:11:28.516 { 00:11:28.516 "job": "raid_bdev1", 00:11:28.516 "core_mask": "0x1", 00:11:28.516 "workload": "randrw", 00:11:28.516 "percentage": 50, 00:11:28.516 "status": "finished", 00:11:28.516 "queue_depth": 1, 00:11:28.516 "io_size": 131072, 00:11:28.516 "runtime": 1.381127, 00:11:28.516 "iops": 15174.563961170841, 00:11:28.516 "mibps": 1896.8204951463551, 00:11:28.516 "io_failed": 1, 00:11:28.516 "io_timeout": 0, 00:11:28.516 "avg_latency_us": 91.41720400627612, 00:11:28.516 "min_latency_us": 26.664342369040355, 00:11:28.516 "max_latency_us": 1592.2740346901421 00:11:28.516 } 00:11:28.516 ], 00:11:28.516 "core_count": 1 00:11:28.516 } 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85769 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85769 ']' 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85769 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85769 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.516 killing process with pid 85769 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85769' 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85769 00:11:28.516 [2024-11-21 03:20:16.065379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.516 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85769 00:11:28.775 [2024-11-21 03:20:16.102825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rJQZ9Oycn5 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:28.775 00:11:28.775 real 0m3.354s 00:11:28.775 user 0m4.224s 00:11:28.775 sys 0m0.577s 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.775 03:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.775 ************************************ 00:11:28.775 END TEST raid_read_error_test 00:11:28.775 ************************************ 00:11:29.035 03:20:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:11:29.035 03:20:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.035 03:20:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.035 03:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.035 ************************************ 00:11:29.035 START TEST raid_write_error_test 00:11:29.035 ************************************ 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WqUsK6ehis 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85900 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 85900 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85900 ']' 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.035 03:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.035 [2024-11-21 03:20:16.510122] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:29.035 [2024-11-21 03:20:16.510249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85900 ] 00:11:29.294 [2024-11-21 03:20:16.645498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:29.294 [2024-11-21 03:20:16.663859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.294 [2024-11-21 03:20:16.693925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.294 [2024-11-21 03:20:16.737294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.294 [2024-11-21 03:20:16.737337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.910 BaseBdev1_malloc 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.910 true 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:29.910 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.910 03:20:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.910 [2024-11-21 03:20:17.417394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:29.910 [2024-11-21 03:20:17.417469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.910 [2024-11-21 03:20:17.417492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:29.910 [2024-11-21 03:20:17.417507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.910 [2024-11-21 03:20:17.419943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.910 [2024-11-21 03:20:17.419998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.910 BaseBdev1 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.911 BaseBdev2_malloc 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.911 true 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.911 [2024-11-21 03:20:17.458396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:29.911 [2024-11-21 03:20:17.458461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.911 [2024-11-21 03:20:17.458478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:29.911 [2024-11-21 03:20:17.458490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.911 [2024-11-21 03:20:17.460639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.911 [2024-11-21 03:20:17.460683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:29.911 BaseBdev2 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.911 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 BaseBdev3_malloc 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 true 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 [2024-11-21 03:20:17.499502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.171 [2024-11-21 03:20:17.499570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.171 [2024-11-21 03:20:17.499594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.171 [2024-11-21 03:20:17.499607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.171 [2024-11-21 03:20:17.501821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.171 [2024-11-21 03:20:17.501866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.171 BaseBdev3 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 BaseBdev4_malloc 00:11:30.171 
03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 true 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 [2024-11-21 03:20:17.551068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.171 [2024-11-21 03:20:17.551147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.171 [2024-11-21 03:20:17.551174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.171 [2024-11-21 03:20:17.551187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.171 [2024-11-21 03:20:17.553686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.171 [2024-11-21 03:20:17.553743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.171 BaseBdev4 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.171 03:20:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.171 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.171 [2024-11-21 03:20:17.563128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.171 [2024-11-21 03:20:17.565391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.172 [2024-11-21 03:20:17.565485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.172 [2024-11-21 03:20:17.565554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.172 [2024-11-21 03:20:17.565817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.172 [2024-11-21 03:20:17.565843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.172 [2024-11-21 03:20:17.566194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:30.172 [2024-11-21 03:20:17.566383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.172 [2024-11-21 03:20:17.566403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:30.172 [2024-11-21 03:20:17.566597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.172 "name": "raid_bdev1", 00:11:30.172 "uuid": "796e5b7c-9a58-425d-84c1-b8dd9a202de0", 00:11:30.172 "strip_size_kb": 64, 00:11:30.172 "state": "online", 00:11:30.172 "raid_level": "concat", 00:11:30.172 "superblock": true, 00:11:30.172 "num_base_bdevs": 4, 00:11:30.172 "num_base_bdevs_discovered": 4, 00:11:30.172 "num_base_bdevs_operational": 4, 00:11:30.172 "base_bdevs_list": [ 00:11:30.172 { 00:11:30.172 "name": "BaseBdev1", 00:11:30.172 "uuid": "0a6dccf3-1fd7-5de1-a394-aff32fad4823", 00:11:30.172 "is_configured": true, 00:11:30.172 "data_offset": 2048, 00:11:30.172 "data_size": 63488 00:11:30.172 }, 00:11:30.172 { 00:11:30.172 
"name": "BaseBdev2", 00:11:30.172 "uuid": "d39a3061-24b0-57c2-a8f9-3cb3c355fd97", 00:11:30.172 "is_configured": true, 00:11:30.172 "data_offset": 2048, 00:11:30.172 "data_size": 63488 00:11:30.172 }, 00:11:30.172 { 00:11:30.172 "name": "BaseBdev3", 00:11:30.172 "uuid": "8f1a2549-2ab8-58d2-a294-b4d16385a227", 00:11:30.172 "is_configured": true, 00:11:30.172 "data_offset": 2048, 00:11:30.172 "data_size": 63488 00:11:30.172 }, 00:11:30.172 { 00:11:30.172 "name": "BaseBdev4", 00:11:30.172 "uuid": "0dd9adee-554d-50cf-a225-a54c6d4a4cf5", 00:11:30.172 "is_configured": true, 00:11:30.172 "data_offset": 2048, 00:11:30.172 "data_size": 63488 00:11:30.172 } 00:11:30.172 ] 00:11:30.172 }' 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.172 03:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.741 03:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:30.741 03:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:30.741 [2024-11-21 03:20:18.091647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.678 "name": "raid_bdev1", 00:11:31.678 "uuid": "796e5b7c-9a58-425d-84c1-b8dd9a202de0", 00:11:31.678 "strip_size_kb": 64, 00:11:31.678 "state": "online", 
00:11:31.678 "raid_level": "concat", 00:11:31.678 "superblock": true, 00:11:31.678 "num_base_bdevs": 4, 00:11:31.678 "num_base_bdevs_discovered": 4, 00:11:31.678 "num_base_bdevs_operational": 4, 00:11:31.678 "base_bdevs_list": [ 00:11:31.678 { 00:11:31.678 "name": "BaseBdev1", 00:11:31.678 "uuid": "0a6dccf3-1fd7-5de1-a394-aff32fad4823", 00:11:31.678 "is_configured": true, 00:11:31.678 "data_offset": 2048, 00:11:31.678 "data_size": 63488 00:11:31.678 }, 00:11:31.678 { 00:11:31.678 "name": "BaseBdev2", 00:11:31.678 "uuid": "d39a3061-24b0-57c2-a8f9-3cb3c355fd97", 00:11:31.678 "is_configured": true, 00:11:31.678 "data_offset": 2048, 00:11:31.678 "data_size": 63488 00:11:31.678 }, 00:11:31.678 { 00:11:31.678 "name": "BaseBdev3", 00:11:31.678 "uuid": "8f1a2549-2ab8-58d2-a294-b4d16385a227", 00:11:31.678 "is_configured": true, 00:11:31.678 "data_offset": 2048, 00:11:31.678 "data_size": 63488 00:11:31.678 }, 00:11:31.678 { 00:11:31.678 "name": "BaseBdev4", 00:11:31.678 "uuid": "0dd9adee-554d-50cf-a225-a54c6d4a4cf5", 00:11:31.678 "is_configured": true, 00:11:31.678 "data_offset": 2048, 00:11:31.678 "data_size": 63488 00:11:31.678 } 00:11:31.678 ] 00:11:31.678 }' 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.678 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.937 [2024-11-21 03:20:19.479665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:31.937 [2024-11-21 03:20:19.479714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.937 [2024-11-21 03:20:19.482275] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.937 [2024-11-21 03:20:19.482334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.937 [2024-11-21 03:20:19.482376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.937 [2024-11-21 03:20:19.482390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:31.937 { 00:11:31.937 "results": [ 00:11:31.937 { 00:11:31.937 "job": "raid_bdev1", 00:11:31.937 "core_mask": "0x1", 00:11:31.937 "workload": "randrw", 00:11:31.937 "percentage": 50, 00:11:31.937 "status": "finished", 00:11:31.937 "queue_depth": 1, 00:11:31.937 "io_size": 131072, 00:11:31.937 "runtime": 1.385937, 00:11:31.937 "iops": 14921.31316214229, 00:11:31.937 "mibps": 1865.1641452677864, 00:11:31.937 "io_failed": 1, 00:11:31.937 "io_timeout": 0, 00:11:31.937 "avg_latency_us": 92.95723346454346, 00:11:31.937 "min_latency_us": 26.664342369040355, 00:11:31.937 "max_latency_us": 1685.0971846945001 00:11:31.937 } 00:11:31.937 ], 00:11:31.937 "core_count": 1 00:11:31.937 } 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85900 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85900 ']' 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85900 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.937 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85900 00:11:32.196 03:20:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.196 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.196 killing process with pid 85900 00:11:32.196 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85900' 00:11:32.196 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85900 00:11:32.196 [2024-11-21 03:20:19.530728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.196 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85900 00:11:32.196 [2024-11-21 03:20:19.567382] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WqUsK6ehis 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:32.454 00:11:32.454 real 0m3.390s 00:11:32.454 user 0m4.282s 00:11:32.454 sys 0m0.575s 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.454 03:20:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.454 ************************************ 00:11:32.454 END TEST raid_write_error_test 00:11:32.454 
************************************ 00:11:32.454 03:20:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:32.454 03:20:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:32.454 03:20:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.454 03:20:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.454 03:20:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.454 ************************************ 00:11:32.454 START TEST raid_state_function_test 00:11:32.454 ************************************ 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.454 03:20:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=86030 00:11:32.454 Process raid pid: 86030 
00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86030' 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 86030 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 86030 ']' 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.454 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.455 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.455 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.455 03:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.455 [2024-11-21 03:20:19.976599] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:32.455 [2024-11-21 03:20:19.977252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.713 [2024-11-21 03:20:20.121273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:32.713 [2024-11-21 03:20:20.157103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.713 [2024-11-21 03:20:20.188053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.713 [2024-11-21 03:20:20.232233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.713 [2024-11-21 03:20:20.232270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.280 [2024-11-21 03:20:20.815465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.280 [2024-11-21 03:20:20.815525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.280 [2024-11-21 03:20:20.815545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.280 [2024-11-21 03:20:20.815554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.280 [2024-11-21 03:20:20.815565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.280 [2024-11-21 03:20:20.815573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.280 [2024-11-21 03:20:20.815584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.280 [2024-11-21 
03:20:20.815591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.280 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.539 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.539 03:20:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.539 "name": "Existed_Raid", 00:11:33.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.539 "strip_size_kb": 0, 00:11:33.539 "state": "configuring", 00:11:33.539 "raid_level": "raid1", 00:11:33.539 "superblock": false, 00:11:33.539 "num_base_bdevs": 4, 00:11:33.539 "num_base_bdevs_discovered": 0, 00:11:33.539 "num_base_bdevs_operational": 4, 00:11:33.539 "base_bdevs_list": [ 00:11:33.539 { 00:11:33.539 "name": "BaseBdev1", 00:11:33.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.539 "is_configured": false, 00:11:33.539 "data_offset": 0, 00:11:33.539 "data_size": 0 00:11:33.539 }, 00:11:33.539 { 00:11:33.539 "name": "BaseBdev2", 00:11:33.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.539 "is_configured": false, 00:11:33.539 "data_offset": 0, 00:11:33.539 "data_size": 0 00:11:33.539 }, 00:11:33.540 { 00:11:33.540 "name": "BaseBdev3", 00:11:33.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.540 "is_configured": false, 00:11:33.540 "data_offset": 0, 00:11:33.540 "data_size": 0 00:11:33.540 }, 00:11:33.540 { 00:11:33.540 "name": "BaseBdev4", 00:11:33.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.540 "is_configured": false, 00:11:33.540 "data_offset": 0, 00:11:33.540 "data_size": 0 00:11:33.540 } 00:11:33.540 ] 00:11:33.540 }' 00:11:33.540 03:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.540 03:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.799 [2024-11-21 03:20:21.271485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:33.799 [2024-11-21 03:20:21.271530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.799 [2024-11-21 03:20:21.283531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.799 [2024-11-21 03:20:21.283577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.799 [2024-11-21 03:20:21.283590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.799 [2024-11-21 03:20:21.283600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.799 [2024-11-21 03:20:21.283609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.799 [2024-11-21 03:20:21.283617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.799 [2024-11-21 03:20:21.283626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.799 [2024-11-21 03:20:21.283634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.799 03:20:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.799 [2024-11-21 03:20:21.301004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.799 BaseBdev1 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.799 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.799 [ 00:11:33.799 { 00:11:33.799 "name": "BaseBdev1", 00:11:33.800 "aliases": [ 
00:11:33.800 "205fd18e-afa1-40e8-847d-cb363eac1121" 00:11:33.800 ], 00:11:33.800 "product_name": "Malloc disk", 00:11:33.800 "block_size": 512, 00:11:33.800 "num_blocks": 65536, 00:11:33.800 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:33.800 "assigned_rate_limits": { 00:11:33.800 "rw_ios_per_sec": 0, 00:11:33.800 "rw_mbytes_per_sec": 0, 00:11:33.800 "r_mbytes_per_sec": 0, 00:11:33.800 "w_mbytes_per_sec": 0 00:11:33.800 }, 00:11:33.800 "claimed": true, 00:11:33.800 "claim_type": "exclusive_write", 00:11:33.800 "zoned": false, 00:11:33.800 "supported_io_types": { 00:11:33.800 "read": true, 00:11:33.800 "write": true, 00:11:33.800 "unmap": true, 00:11:33.800 "flush": true, 00:11:33.800 "reset": true, 00:11:33.800 "nvme_admin": false, 00:11:33.800 "nvme_io": false, 00:11:33.800 "nvme_io_md": false, 00:11:33.800 "write_zeroes": true, 00:11:33.800 "zcopy": true, 00:11:33.800 "get_zone_info": false, 00:11:33.800 "zone_management": false, 00:11:33.800 "zone_append": false, 00:11:33.800 "compare": false, 00:11:33.800 "compare_and_write": false, 00:11:33.800 "abort": true, 00:11:33.800 "seek_hole": false, 00:11:33.800 "seek_data": false, 00:11:33.800 "copy": true, 00:11:33.800 "nvme_iov_md": false 00:11:33.800 }, 00:11:33.800 "memory_domains": [ 00:11:33.800 { 00:11:33.800 "dma_device_id": "system", 00:11:33.800 "dma_device_type": 1 00:11:33.800 }, 00:11:33.800 { 00:11:33.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.800 "dma_device_type": 2 00:11:33.800 } 00:11:33.800 ], 00:11:33.800 "driver_specific": {} 00:11:33.800 } 00:11:33.800 ] 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.800 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.082 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.082 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.082 "name": "Existed_Raid", 00:11:34.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.082 "strip_size_kb": 0, 00:11:34.082 "state": "configuring", 00:11:34.082 "raid_level": "raid1", 00:11:34.082 "superblock": false, 00:11:34.082 "num_base_bdevs": 4, 00:11:34.082 "num_base_bdevs_discovered": 1, 00:11:34.082 "num_base_bdevs_operational": 4, 
00:11:34.082 "base_bdevs_list": [ 00:11:34.082 { 00:11:34.082 "name": "BaseBdev1", 00:11:34.082 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:34.082 "is_configured": true, 00:11:34.082 "data_offset": 0, 00:11:34.082 "data_size": 65536 00:11:34.082 }, 00:11:34.082 { 00:11:34.082 "name": "BaseBdev2", 00:11:34.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.082 "is_configured": false, 00:11:34.082 "data_offset": 0, 00:11:34.082 "data_size": 0 00:11:34.082 }, 00:11:34.082 { 00:11:34.082 "name": "BaseBdev3", 00:11:34.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.082 "is_configured": false, 00:11:34.082 "data_offset": 0, 00:11:34.082 "data_size": 0 00:11:34.082 }, 00:11:34.082 { 00:11:34.082 "name": "BaseBdev4", 00:11:34.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.082 "is_configured": false, 00:11:34.082 "data_offset": 0, 00:11:34.082 "data_size": 0 00:11:34.082 } 00:11:34.082 ] 00:11:34.082 }' 00:11:34.082 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.082 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.340 [2024-11-21 03:20:21.813213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.340 [2024-11-21 03:20:21.813289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.340 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.340 [2024-11-21 03:20:21.825249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.340 [2024-11-21 03:20:21.827242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.340 [2024-11-21 03:20:21.827289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.340 [2024-11-21 03:20:21.827301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.341 [2024-11-21 03:20:21.827311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.341 [2024-11-21 03:20:21.827320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.341 [2024-11-21 03:20:21.827328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.341 "name": "Existed_Raid", 00:11:34.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.341 "strip_size_kb": 0, 00:11:34.341 "state": "configuring", 00:11:34.341 "raid_level": "raid1", 00:11:34.341 "superblock": false, 00:11:34.341 "num_base_bdevs": 4, 00:11:34.341 "num_base_bdevs_discovered": 1, 00:11:34.341 "num_base_bdevs_operational": 4, 00:11:34.341 "base_bdevs_list": [ 00:11:34.341 { 00:11:34.341 "name": "BaseBdev1", 00:11:34.341 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:34.341 "is_configured": true, 00:11:34.341 "data_offset": 0, 00:11:34.341 "data_size": 65536 00:11:34.341 }, 00:11:34.341 { 
00:11:34.341 "name": "BaseBdev2", 00:11:34.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.341 "is_configured": false, 00:11:34.341 "data_offset": 0, 00:11:34.341 "data_size": 0 00:11:34.341 }, 00:11:34.341 { 00:11:34.341 "name": "BaseBdev3", 00:11:34.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.341 "is_configured": false, 00:11:34.341 "data_offset": 0, 00:11:34.341 "data_size": 0 00:11:34.341 }, 00:11:34.341 { 00:11:34.341 "name": "BaseBdev4", 00:11:34.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.341 "is_configured": false, 00:11:34.341 "data_offset": 0, 00:11:34.341 "data_size": 0 00:11:34.341 } 00:11:34.341 ] 00:11:34.341 }' 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.341 03:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.909 [2024-11-21 03:20:22.280798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.909 BaseBdev2 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.909 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.909 [ 00:11:34.909 { 00:11:34.909 "name": "BaseBdev2", 00:11:34.909 "aliases": [ 00:11:34.909 "7462561b-7c9f-4fda-b9ee-48a95b5771ba" 00:11:34.909 ], 00:11:34.909 "product_name": "Malloc disk", 00:11:34.909 "block_size": 512, 00:11:34.909 "num_blocks": 65536, 00:11:34.909 "uuid": "7462561b-7c9f-4fda-b9ee-48a95b5771ba", 00:11:34.909 "assigned_rate_limits": { 00:11:34.909 "rw_ios_per_sec": 0, 00:11:34.909 "rw_mbytes_per_sec": 0, 00:11:34.909 "r_mbytes_per_sec": 0, 00:11:34.909 "w_mbytes_per_sec": 0 00:11:34.909 }, 00:11:34.909 "claimed": true, 00:11:34.909 "claim_type": "exclusive_write", 00:11:34.909 "zoned": false, 00:11:34.909 "supported_io_types": { 00:11:34.909 "read": true, 00:11:34.909 "write": true, 00:11:34.909 "unmap": true, 00:11:34.909 "flush": true, 00:11:34.909 "reset": true, 00:11:34.909 "nvme_admin": false, 00:11:34.909 "nvme_io": false, 00:11:34.909 "nvme_io_md": false, 00:11:34.909 "write_zeroes": true, 00:11:34.909 "zcopy": true, 00:11:34.909 "get_zone_info": false, 00:11:34.909 "zone_management": false, 
00:11:34.909 "zone_append": false, 00:11:34.909 "compare": false, 00:11:34.909 "compare_and_write": false, 00:11:34.909 "abort": true, 00:11:34.909 "seek_hole": false, 00:11:34.909 "seek_data": false, 00:11:34.909 "copy": true, 00:11:34.909 "nvme_iov_md": false 00:11:34.909 }, 00:11:34.909 "memory_domains": [ 00:11:34.909 { 00:11:34.909 "dma_device_id": "system", 00:11:34.909 "dma_device_type": 1 00:11:34.910 }, 00:11:34.910 { 00:11:34.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.910 "dma_device_type": 2 00:11:34.910 } 00:11:34.910 ], 00:11:34.910 "driver_specific": {} 00:11:34.910 } 00:11:34.910 ] 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.910 03:20:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.910 "name": "Existed_Raid", 00:11:34.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.910 "strip_size_kb": 0, 00:11:34.910 "state": "configuring", 00:11:34.910 "raid_level": "raid1", 00:11:34.910 "superblock": false, 00:11:34.910 "num_base_bdevs": 4, 00:11:34.910 "num_base_bdevs_discovered": 2, 00:11:34.910 "num_base_bdevs_operational": 4, 00:11:34.910 "base_bdevs_list": [ 00:11:34.910 { 00:11:34.910 "name": "BaseBdev1", 00:11:34.910 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:34.910 "is_configured": true, 00:11:34.910 "data_offset": 0, 00:11:34.910 "data_size": 65536 00:11:34.910 }, 00:11:34.910 { 00:11:34.910 "name": "BaseBdev2", 00:11:34.910 "uuid": "7462561b-7c9f-4fda-b9ee-48a95b5771ba", 00:11:34.910 "is_configured": true, 00:11:34.910 "data_offset": 0, 00:11:34.910 "data_size": 65536 00:11:34.910 }, 00:11:34.910 { 00:11:34.910 "name": "BaseBdev3", 00:11:34.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.910 "is_configured": false, 00:11:34.910 "data_offset": 0, 00:11:34.910 "data_size": 0 00:11:34.910 }, 00:11:34.910 { 00:11:34.910 "name": "BaseBdev4", 
00:11:34.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.910 "is_configured": false, 00:11:34.910 "data_offset": 0, 00:11:34.910 "data_size": 0 00:11:34.910 } 00:11:34.910 ] 00:11:34.910 }' 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.910 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.479 [2024-11-21 03:20:22.777576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.479 BaseBdev3 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.479 [ 00:11:35.479 { 00:11:35.479 "name": "BaseBdev3", 00:11:35.479 "aliases": [ 00:11:35.479 "e3fff3bc-eaea-4d7e-8449-ad0da9ab4b19" 00:11:35.479 ], 00:11:35.479 "product_name": "Malloc disk", 00:11:35.479 "block_size": 512, 00:11:35.479 "num_blocks": 65536, 00:11:35.479 "uuid": "e3fff3bc-eaea-4d7e-8449-ad0da9ab4b19", 00:11:35.479 "assigned_rate_limits": { 00:11:35.479 "rw_ios_per_sec": 0, 00:11:35.479 "rw_mbytes_per_sec": 0, 00:11:35.479 "r_mbytes_per_sec": 0, 00:11:35.479 "w_mbytes_per_sec": 0 00:11:35.479 }, 00:11:35.479 "claimed": true, 00:11:35.479 "claim_type": "exclusive_write", 00:11:35.479 "zoned": false, 00:11:35.479 "supported_io_types": { 00:11:35.479 "read": true, 00:11:35.479 "write": true, 00:11:35.479 "unmap": true, 00:11:35.479 "flush": true, 00:11:35.479 "reset": true, 00:11:35.479 "nvme_admin": false, 00:11:35.479 "nvme_io": false, 00:11:35.479 "nvme_io_md": false, 00:11:35.479 "write_zeroes": true, 00:11:35.479 "zcopy": true, 00:11:35.479 "get_zone_info": false, 00:11:35.479 "zone_management": false, 00:11:35.479 "zone_append": false, 00:11:35.479 "compare": false, 00:11:35.479 "compare_and_write": false, 00:11:35.479 "abort": true, 00:11:35.479 "seek_hole": false, 00:11:35.479 "seek_data": false, 00:11:35.479 "copy": true, 00:11:35.479 "nvme_iov_md": false 00:11:35.479 }, 00:11:35.479 "memory_domains": [ 00:11:35.479 { 00:11:35.479 "dma_device_id": "system", 00:11:35.479 "dma_device_type": 1 00:11:35.479 }, 00:11:35.479 { 00:11:35.479 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:35.479 "dma_device_type": 2 00:11:35.479 } 00:11:35.479 ], 00:11:35.479 "driver_specific": {} 00:11:35.479 } 00:11:35.479 ] 00:11:35.479 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.480 03:20:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.480 "name": "Existed_Raid", 00:11:35.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.480 "strip_size_kb": 0, 00:11:35.480 "state": "configuring", 00:11:35.480 "raid_level": "raid1", 00:11:35.480 "superblock": false, 00:11:35.480 "num_base_bdevs": 4, 00:11:35.480 "num_base_bdevs_discovered": 3, 00:11:35.480 "num_base_bdevs_operational": 4, 00:11:35.480 "base_bdevs_list": [ 00:11:35.480 { 00:11:35.480 "name": "BaseBdev1", 00:11:35.480 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:35.480 "is_configured": true, 00:11:35.480 "data_offset": 0, 00:11:35.480 "data_size": 65536 00:11:35.480 }, 00:11:35.480 { 00:11:35.480 "name": "BaseBdev2", 00:11:35.480 "uuid": "7462561b-7c9f-4fda-b9ee-48a95b5771ba", 00:11:35.480 "is_configured": true, 00:11:35.480 "data_offset": 0, 00:11:35.480 "data_size": 65536 00:11:35.480 }, 00:11:35.480 { 00:11:35.480 "name": "BaseBdev3", 00:11:35.480 "uuid": "e3fff3bc-eaea-4d7e-8449-ad0da9ab4b19", 00:11:35.480 "is_configured": true, 00:11:35.480 "data_offset": 0, 00:11:35.480 "data_size": 65536 00:11:35.480 }, 00:11:35.480 { 00:11:35.480 "name": "BaseBdev4", 00:11:35.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.480 "is_configured": false, 00:11:35.480 "data_offset": 0, 00:11:35.480 "data_size": 0 00:11:35.480 } 00:11:35.480 ] 00:11:35.480 }' 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.480 03:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 03:20:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.738 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.738 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 [2024-11-21 03:20:23.301673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.738 [2024-11-21 03:20:23.301818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:35.738 [2024-11-21 03:20:23.301870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.738 [2024-11-21 03:20:23.302271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:35.997 [2024-11-21 03:20:23.302502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:35.998 [2024-11-21 03:20:23.302566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:35.998 [2024-11-21 03:20:23.302888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.998 BaseBdev4 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.998 [ 00:11:35.998 { 00:11:35.998 "name": "BaseBdev4", 00:11:35.998 "aliases": [ 00:11:35.998 "e23c48cd-70a7-46e5-8e85-a27eca086f06" 00:11:35.998 ], 00:11:35.998 "product_name": "Malloc disk", 00:11:35.998 "block_size": 512, 00:11:35.998 "num_blocks": 65536, 00:11:35.998 "uuid": "e23c48cd-70a7-46e5-8e85-a27eca086f06", 00:11:35.998 "assigned_rate_limits": { 00:11:35.998 "rw_ios_per_sec": 0, 00:11:35.998 "rw_mbytes_per_sec": 0, 00:11:35.998 "r_mbytes_per_sec": 0, 00:11:35.998 "w_mbytes_per_sec": 0 00:11:35.998 }, 00:11:35.998 "claimed": true, 00:11:35.998 "claim_type": "exclusive_write", 00:11:35.998 "zoned": false, 00:11:35.998 "supported_io_types": { 00:11:35.998 "read": true, 00:11:35.998 "write": true, 00:11:35.998 "unmap": true, 00:11:35.998 "flush": true, 00:11:35.998 "reset": true, 00:11:35.998 "nvme_admin": false, 00:11:35.998 "nvme_io": false, 00:11:35.998 "nvme_io_md": false, 00:11:35.998 "write_zeroes": true, 00:11:35.998 "zcopy": true, 00:11:35.998 "get_zone_info": false, 00:11:35.998 "zone_management": false, 00:11:35.998 "zone_append": false, 00:11:35.998 "compare": false, 00:11:35.998 "compare_and_write": false, 
00:11:35.998 "abort": true, 00:11:35.998 "seek_hole": false, 00:11:35.998 "seek_data": false, 00:11:35.998 "copy": true, 00:11:35.998 "nvme_iov_md": false 00:11:35.998 }, 00:11:35.998 "memory_domains": [ 00:11:35.998 { 00:11:35.998 "dma_device_id": "system", 00:11:35.998 "dma_device_type": 1 00:11:35.998 }, 00:11:35.998 { 00:11:35.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.998 "dma_device_type": 2 00:11:35.998 } 00:11:35.998 ], 00:11:35.998 "driver_specific": {} 00:11:35.998 } 00:11:35.998 ] 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.998 
03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.998 "name": "Existed_Raid", 00:11:35.998 "uuid": "9016f68b-61e1-488b-8a33-4840b84dbd86", 00:11:35.998 "strip_size_kb": 0, 00:11:35.998 "state": "online", 00:11:35.998 "raid_level": "raid1", 00:11:35.998 "superblock": false, 00:11:35.998 "num_base_bdevs": 4, 00:11:35.998 "num_base_bdevs_discovered": 4, 00:11:35.998 "num_base_bdevs_operational": 4, 00:11:35.998 "base_bdevs_list": [ 00:11:35.998 { 00:11:35.998 "name": "BaseBdev1", 00:11:35.998 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:35.998 "is_configured": true, 00:11:35.998 "data_offset": 0, 00:11:35.998 "data_size": 65536 00:11:35.998 }, 00:11:35.998 { 00:11:35.998 "name": "BaseBdev2", 00:11:35.998 "uuid": "7462561b-7c9f-4fda-b9ee-48a95b5771ba", 00:11:35.998 "is_configured": true, 00:11:35.998 "data_offset": 0, 00:11:35.998 "data_size": 65536 00:11:35.998 }, 00:11:35.998 { 00:11:35.998 "name": "BaseBdev3", 00:11:35.998 "uuid": "e3fff3bc-eaea-4d7e-8449-ad0da9ab4b19", 00:11:35.998 "is_configured": true, 00:11:35.998 "data_offset": 0, 00:11:35.998 "data_size": 65536 00:11:35.998 }, 00:11:35.998 { 00:11:35.998 "name": "BaseBdev4", 00:11:35.998 "uuid": "e23c48cd-70a7-46e5-8e85-a27eca086f06", 00:11:35.998 "is_configured": true, 00:11:35.998 
"data_offset": 0, 00:11:35.998 "data_size": 65536 00:11:35.998 } 00:11:35.998 ] 00:11:35.998 }' 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.998 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.258 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.517 [2024-11-21 03:20:23.822350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.517 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.517 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.517 "name": "Existed_Raid", 00:11:36.517 "aliases": [ 00:11:36.517 "9016f68b-61e1-488b-8a33-4840b84dbd86" 00:11:36.517 ], 00:11:36.517 "product_name": "Raid Volume", 00:11:36.517 "block_size": 512, 00:11:36.517 "num_blocks": 65536, 
00:11:36.517 "uuid": "9016f68b-61e1-488b-8a33-4840b84dbd86", 00:11:36.517 "assigned_rate_limits": { 00:11:36.517 "rw_ios_per_sec": 0, 00:11:36.517 "rw_mbytes_per_sec": 0, 00:11:36.517 "r_mbytes_per_sec": 0, 00:11:36.517 "w_mbytes_per_sec": 0 00:11:36.517 }, 00:11:36.517 "claimed": false, 00:11:36.517 "zoned": false, 00:11:36.517 "supported_io_types": { 00:11:36.517 "read": true, 00:11:36.517 "write": true, 00:11:36.517 "unmap": false, 00:11:36.517 "flush": false, 00:11:36.517 "reset": true, 00:11:36.517 "nvme_admin": false, 00:11:36.517 "nvme_io": false, 00:11:36.517 "nvme_io_md": false, 00:11:36.517 "write_zeroes": true, 00:11:36.517 "zcopy": false, 00:11:36.517 "get_zone_info": false, 00:11:36.517 "zone_management": false, 00:11:36.517 "zone_append": false, 00:11:36.517 "compare": false, 00:11:36.517 "compare_and_write": false, 00:11:36.517 "abort": false, 00:11:36.517 "seek_hole": false, 00:11:36.517 "seek_data": false, 00:11:36.517 "copy": false, 00:11:36.517 "nvme_iov_md": false 00:11:36.517 }, 00:11:36.517 "memory_domains": [ 00:11:36.517 { 00:11:36.517 "dma_device_id": "system", 00:11:36.517 "dma_device_type": 1 00:11:36.517 }, 00:11:36.517 { 00:11:36.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.517 "dma_device_type": 2 00:11:36.517 }, 00:11:36.517 { 00:11:36.517 "dma_device_id": "system", 00:11:36.517 "dma_device_type": 1 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.518 "dma_device_type": 2 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "dma_device_id": "system", 00:11:36.518 "dma_device_type": 1 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.518 "dma_device_type": 2 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "dma_device_id": "system", 00:11:36.518 "dma_device_type": 1 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.518 "dma_device_type": 2 00:11:36.518 } 00:11:36.518 ], 00:11:36.518 "driver_specific": { 
00:11:36.518 "raid": { 00:11:36.518 "uuid": "9016f68b-61e1-488b-8a33-4840b84dbd86", 00:11:36.518 "strip_size_kb": 0, 00:11:36.518 "state": "online", 00:11:36.518 "raid_level": "raid1", 00:11:36.518 "superblock": false, 00:11:36.518 "num_base_bdevs": 4, 00:11:36.518 "num_base_bdevs_discovered": 4, 00:11:36.518 "num_base_bdevs_operational": 4, 00:11:36.518 "base_bdevs_list": [ 00:11:36.518 { 00:11:36.518 "name": "BaseBdev1", 00:11:36.518 "uuid": "205fd18e-afa1-40e8-847d-cb363eac1121", 00:11:36.518 "is_configured": true, 00:11:36.518 "data_offset": 0, 00:11:36.518 "data_size": 65536 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "name": "BaseBdev2", 00:11:36.518 "uuid": "7462561b-7c9f-4fda-b9ee-48a95b5771ba", 00:11:36.518 "is_configured": true, 00:11:36.518 "data_offset": 0, 00:11:36.518 "data_size": 65536 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "name": "BaseBdev3", 00:11:36.518 "uuid": "e3fff3bc-eaea-4d7e-8449-ad0da9ab4b19", 00:11:36.518 "is_configured": true, 00:11:36.518 "data_offset": 0, 00:11:36.518 "data_size": 65536 00:11:36.518 }, 00:11:36.518 { 00:11:36.518 "name": "BaseBdev4", 00:11:36.518 "uuid": "e23c48cd-70a7-46e5-8e85-a27eca086f06", 00:11:36.518 "is_configured": true, 00:11:36.518 "data_offset": 0, 00:11:36.518 "data_size": 65536 00:11:36.518 } 00:11:36.518 ] 00:11:36.518 } 00:11:36.518 } 00:11:36.518 }' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.518 BaseBdev2 00:11:36.518 BaseBdev3 00:11:36.518 BaseBdev4' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.518 03:20:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.518 03:20:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.518 03:20:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.518 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.777 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.777 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.778 
03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.778 [2024-11-21 03:20:24.110147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.778 03:20:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.778 "name": "Existed_Raid", 00:11:36.778 "uuid": "9016f68b-61e1-488b-8a33-4840b84dbd86", 00:11:36.778 "strip_size_kb": 0, 00:11:36.778 "state": "online", 00:11:36.778 "raid_level": "raid1", 00:11:36.778 "superblock": false, 00:11:36.778 "num_base_bdevs": 4, 00:11:36.778 "num_base_bdevs_discovered": 3, 00:11:36.778 "num_base_bdevs_operational": 3, 00:11:36.778 "base_bdevs_list": [ 00:11:36.778 { 00:11:36.778 "name": null, 00:11:36.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.778 "is_configured": false, 00:11:36.778 "data_offset": 0, 00:11:36.778 "data_size": 65536 00:11:36.778 }, 00:11:36.778 { 00:11:36.778 "name": "BaseBdev2", 00:11:36.778 "uuid": "7462561b-7c9f-4fda-b9ee-48a95b5771ba", 00:11:36.778 "is_configured": true, 00:11:36.778 "data_offset": 0, 00:11:36.778 "data_size": 65536 00:11:36.778 }, 00:11:36.778 { 00:11:36.778 "name": "BaseBdev3", 00:11:36.778 "uuid": "e3fff3bc-eaea-4d7e-8449-ad0da9ab4b19", 00:11:36.778 "is_configured": true, 00:11:36.778 "data_offset": 0, 00:11:36.778 "data_size": 65536 00:11:36.778 }, 00:11:36.778 { 00:11:36.778 "name": "BaseBdev4", 00:11:36.778 "uuid": "e23c48cd-70a7-46e5-8e85-a27eca086f06", 00:11:36.778 "is_configured": true, 00:11:36.778 "data_offset": 0, 00:11:36.778 "data_size": 65536 00:11:36.778 } 00:11:36.778 ] 00:11:36.778 }' 00:11:36.778 03:20:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.778 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.038 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.038 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.038 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.038 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.038 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.038 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.298 [2024-11-21 03:20:24.638270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.298 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.299 [2024-11-21 03:20:24.706215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.299 [2024-11-21 03:20:24.774157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.299 [2024-11-21 03:20:24.774275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.299 [2024-11-21 03:20:24.786185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.299 [2024-11-21 03:20:24.786242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.299 [2024-11-21 03:20:24.786256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.299 03:20:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.299 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 BaseBdev2 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.559 03:20:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 [ 00:11:37.559 { 00:11:37.559 "name": "BaseBdev2", 00:11:37.559 "aliases": [ 00:11:37.559 "e075366f-045c-42bf-932f-b71bd95c96b8" 00:11:37.559 ], 00:11:37.559 "product_name": "Malloc disk", 00:11:37.559 "block_size": 512, 00:11:37.559 "num_blocks": 65536, 00:11:37.559 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:37.559 "assigned_rate_limits": { 00:11:37.559 "rw_ios_per_sec": 0, 00:11:37.559 "rw_mbytes_per_sec": 0, 00:11:37.559 "r_mbytes_per_sec": 0, 00:11:37.559 "w_mbytes_per_sec": 0 00:11:37.559 }, 00:11:37.559 "claimed": false, 00:11:37.559 "zoned": false, 00:11:37.559 "supported_io_types": { 00:11:37.559 "read": true, 00:11:37.559 "write": true, 00:11:37.559 "unmap": true, 00:11:37.559 "flush": true, 00:11:37.559 "reset": true, 00:11:37.559 "nvme_admin": false, 00:11:37.559 "nvme_io": false, 00:11:37.559 "nvme_io_md": false, 00:11:37.559 "write_zeroes": true, 00:11:37.559 "zcopy": true, 00:11:37.559 "get_zone_info": false, 00:11:37.559 "zone_management": false, 00:11:37.559 "zone_append": false, 00:11:37.559 "compare": false, 00:11:37.559 "compare_and_write": false, 00:11:37.559 "abort": true, 00:11:37.559 "seek_hole": false, 00:11:37.559 "seek_data": false, 00:11:37.559 "copy": true, 00:11:37.559 "nvme_iov_md": false 00:11:37.559 }, 00:11:37.559 "memory_domains": [ 00:11:37.559 { 00:11:37.559 
"dma_device_id": "system", 00:11:37.559 "dma_device_type": 1 00:11:37.559 }, 00:11:37.559 { 00:11:37.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.559 "dma_device_type": 2 00:11:37.559 } 00:11:37.559 ], 00:11:37.559 "driver_specific": {} 00:11:37.559 } 00:11:37.559 ] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 BaseBdev3 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.559 03:20:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 [ 00:11:37.559 { 00:11:37.559 "name": "BaseBdev3", 00:11:37.559 "aliases": [ 00:11:37.559 "30f1b815-86b6-40ad-9d4f-880cdcf8c542" 00:11:37.559 ], 00:11:37.559 "product_name": "Malloc disk", 00:11:37.559 "block_size": 512, 00:11:37.559 "num_blocks": 65536, 00:11:37.559 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:37.559 "assigned_rate_limits": { 00:11:37.559 "rw_ios_per_sec": 0, 00:11:37.559 "rw_mbytes_per_sec": 0, 00:11:37.559 "r_mbytes_per_sec": 0, 00:11:37.559 "w_mbytes_per_sec": 0 00:11:37.559 }, 00:11:37.559 "claimed": false, 00:11:37.559 "zoned": false, 00:11:37.559 "supported_io_types": { 00:11:37.559 "read": true, 00:11:37.559 "write": true, 00:11:37.559 "unmap": true, 00:11:37.559 "flush": true, 00:11:37.559 "reset": true, 00:11:37.559 "nvme_admin": false, 00:11:37.559 "nvme_io": false, 00:11:37.559 "nvme_io_md": false, 00:11:37.559 "write_zeroes": true, 00:11:37.559 "zcopy": true, 00:11:37.559 "get_zone_info": false, 00:11:37.559 "zone_management": false, 00:11:37.559 "zone_append": false, 00:11:37.559 "compare": false, 00:11:37.559 "compare_and_write": false, 00:11:37.559 "abort": true, 00:11:37.559 "seek_hole": false, 00:11:37.559 "seek_data": false, 00:11:37.559 "copy": true, 00:11:37.559 "nvme_iov_md": false 00:11:37.559 }, 00:11:37.559 "memory_domains": [ 00:11:37.559 { 00:11:37.559 
"dma_device_id": "system", 00:11:37.559 "dma_device_type": 1 00:11:37.559 }, 00:11:37.559 { 00:11:37.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.559 "dma_device_type": 2 00:11:37.559 } 00:11:37.559 ], 00:11:37.559 "driver_specific": {} 00:11:37.559 } 00:11:37.559 ] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.559 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.560 BaseBdev4 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.560 03:20:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.560 03:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.560 [ 00:11:37.560 { 00:11:37.560 "name": "BaseBdev4", 00:11:37.560 "aliases": [ 00:11:37.560 "07131f2c-30ab-4d06-8810-41f6dceff87c" 00:11:37.560 ], 00:11:37.560 "product_name": "Malloc disk", 00:11:37.560 "block_size": 512, 00:11:37.560 "num_blocks": 65536, 00:11:37.560 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:37.560 "assigned_rate_limits": { 00:11:37.560 "rw_ios_per_sec": 0, 00:11:37.560 "rw_mbytes_per_sec": 0, 00:11:37.560 "r_mbytes_per_sec": 0, 00:11:37.560 "w_mbytes_per_sec": 0 00:11:37.560 }, 00:11:37.560 "claimed": false, 00:11:37.560 "zoned": false, 00:11:37.560 "supported_io_types": { 00:11:37.560 "read": true, 00:11:37.560 "write": true, 00:11:37.560 "unmap": true, 00:11:37.560 "flush": true, 00:11:37.560 "reset": true, 00:11:37.560 "nvme_admin": false, 00:11:37.560 "nvme_io": false, 00:11:37.560 "nvme_io_md": false, 00:11:37.560 "write_zeroes": true, 00:11:37.560 "zcopy": true, 00:11:37.560 "get_zone_info": false, 00:11:37.560 "zone_management": false, 00:11:37.560 "zone_append": false, 00:11:37.560 "compare": false, 00:11:37.560 "compare_and_write": false, 00:11:37.560 "abort": true, 00:11:37.560 "seek_hole": false, 00:11:37.560 "seek_data": false, 00:11:37.560 "copy": true, 00:11:37.560 "nvme_iov_md": false 00:11:37.560 }, 00:11:37.560 "memory_domains": [ 00:11:37.560 { 00:11:37.560 
"dma_device_id": "system", 00:11:37.560 "dma_device_type": 1 00:11:37.560 }, 00:11:37.560 { 00:11:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.560 "dma_device_type": 2 00:11:37.560 } 00:11:37.560 ], 00:11:37.560 "driver_specific": {} 00:11:37.560 } 00:11:37.560 ] 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.560 [2024-11-21 03:20:25.009878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.560 [2024-11-21 03:20:25.009936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.560 [2024-11-21 03:20:25.009964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.560 [2024-11-21 03:20:25.012235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.560 [2024-11-21 03:20:25.012298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.560 "name": "Existed_Raid", 00:11:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.560 "strip_size_kb": 0, 00:11:37.560 "state": "configuring", 00:11:37.560 "raid_level": "raid1", 00:11:37.560 "superblock": false, 00:11:37.560 "num_base_bdevs": 4, 00:11:37.560 
"num_base_bdevs_discovered": 3, 00:11:37.560 "num_base_bdevs_operational": 4, 00:11:37.560 "base_bdevs_list": [ 00:11:37.560 { 00:11:37.560 "name": "BaseBdev1", 00:11:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.560 "is_configured": false, 00:11:37.560 "data_offset": 0, 00:11:37.560 "data_size": 0 00:11:37.560 }, 00:11:37.560 { 00:11:37.560 "name": "BaseBdev2", 00:11:37.560 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:37.560 "is_configured": true, 00:11:37.560 "data_offset": 0, 00:11:37.560 "data_size": 65536 00:11:37.560 }, 00:11:37.560 { 00:11:37.560 "name": "BaseBdev3", 00:11:37.560 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:37.560 "is_configured": true, 00:11:37.560 "data_offset": 0, 00:11:37.560 "data_size": 65536 00:11:37.560 }, 00:11:37.560 { 00:11:37.560 "name": "BaseBdev4", 00:11:37.560 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:37.560 "is_configured": true, 00:11:37.560 "data_offset": 0, 00:11:37.560 "data_size": 65536 00:11:37.560 } 00:11:37.560 ] 00:11:37.560 }' 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.560 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 [2024-11-21 03:20:25.469984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.128 03:20:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.128 "name": "Existed_Raid", 00:11:38.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.128 "strip_size_kb": 0, 00:11:38.128 "state": "configuring", 00:11:38.128 "raid_level": "raid1", 00:11:38.128 "superblock": false, 00:11:38.128 "num_base_bdevs": 4, 00:11:38.128 "num_base_bdevs_discovered": 2, 00:11:38.128 
"num_base_bdevs_operational": 4, 00:11:38.128 "base_bdevs_list": [ 00:11:38.128 { 00:11:38.128 "name": "BaseBdev1", 00:11:38.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.128 "is_configured": false, 00:11:38.128 "data_offset": 0, 00:11:38.128 "data_size": 0 00:11:38.128 }, 00:11:38.128 { 00:11:38.128 "name": null, 00:11:38.128 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:38.128 "is_configured": false, 00:11:38.128 "data_offset": 0, 00:11:38.128 "data_size": 65536 00:11:38.128 }, 00:11:38.128 { 00:11:38.128 "name": "BaseBdev3", 00:11:38.128 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:38.128 "is_configured": true, 00:11:38.128 "data_offset": 0, 00:11:38.128 "data_size": 65536 00:11:38.128 }, 00:11:38.128 { 00:11:38.128 "name": "BaseBdev4", 00:11:38.128 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:38.128 "is_configured": true, 00:11:38.128 "data_offset": 0, 00:11:38.128 "data_size": 65536 00:11:38.128 } 00:11:38.128 ] 00:11:38.128 }' 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.128 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.386 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.386 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.386 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.386 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.646 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.646 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.646 03:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:38.646 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.646 03:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.646 [2024-11-21 03:20:26.001584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.646 BaseBdev1 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.646 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.646 [ 00:11:38.646 { 00:11:38.646 "name": 
"BaseBdev1", 00:11:38.647 "aliases": [ 00:11:38.647 "f53d5bb5-1d8f-49c0-b8a7-362753f1b768" 00:11:38.647 ], 00:11:38.647 "product_name": "Malloc disk", 00:11:38.647 "block_size": 512, 00:11:38.647 "num_blocks": 65536, 00:11:38.647 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:38.647 "assigned_rate_limits": { 00:11:38.647 "rw_ios_per_sec": 0, 00:11:38.647 "rw_mbytes_per_sec": 0, 00:11:38.647 "r_mbytes_per_sec": 0, 00:11:38.647 "w_mbytes_per_sec": 0 00:11:38.647 }, 00:11:38.647 "claimed": true, 00:11:38.647 "claim_type": "exclusive_write", 00:11:38.647 "zoned": false, 00:11:38.647 "supported_io_types": { 00:11:38.647 "read": true, 00:11:38.647 "write": true, 00:11:38.647 "unmap": true, 00:11:38.647 "flush": true, 00:11:38.647 "reset": true, 00:11:38.647 "nvme_admin": false, 00:11:38.647 "nvme_io": false, 00:11:38.647 "nvme_io_md": false, 00:11:38.647 "write_zeroes": true, 00:11:38.647 "zcopy": true, 00:11:38.647 "get_zone_info": false, 00:11:38.647 "zone_management": false, 00:11:38.647 "zone_append": false, 00:11:38.647 "compare": false, 00:11:38.647 "compare_and_write": false, 00:11:38.647 "abort": true, 00:11:38.647 "seek_hole": false, 00:11:38.647 "seek_data": false, 00:11:38.647 "copy": true, 00:11:38.647 "nvme_iov_md": false 00:11:38.647 }, 00:11:38.647 "memory_domains": [ 00:11:38.647 { 00:11:38.647 "dma_device_id": "system", 00:11:38.647 "dma_device_type": 1 00:11:38.647 }, 00:11:38.647 { 00:11:38.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.647 "dma_device_type": 2 00:11:38.647 } 00:11:38.647 ], 00:11:38.647 "driver_specific": {} 00:11:38.647 } 00:11:38.647 ] 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.647 03:20:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.647 "name": "Existed_Raid", 00:11:38.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.647 "strip_size_kb": 0, 00:11:38.647 "state": "configuring", 00:11:38.647 "raid_level": "raid1", 00:11:38.647 "superblock": false, 00:11:38.647 "num_base_bdevs": 4, 00:11:38.647 "num_base_bdevs_discovered": 3, 00:11:38.647 
"num_base_bdevs_operational": 4, 00:11:38.647 "base_bdevs_list": [ 00:11:38.647 { 00:11:38.647 "name": "BaseBdev1", 00:11:38.647 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:38.647 "is_configured": true, 00:11:38.647 "data_offset": 0, 00:11:38.647 "data_size": 65536 00:11:38.647 }, 00:11:38.647 { 00:11:38.647 "name": null, 00:11:38.647 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:38.647 "is_configured": false, 00:11:38.647 "data_offset": 0, 00:11:38.647 "data_size": 65536 00:11:38.647 }, 00:11:38.647 { 00:11:38.647 "name": "BaseBdev3", 00:11:38.647 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:38.647 "is_configured": true, 00:11:38.647 "data_offset": 0, 00:11:38.647 "data_size": 65536 00:11:38.647 }, 00:11:38.647 { 00:11:38.647 "name": "BaseBdev4", 00:11:38.647 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:38.647 "is_configured": true, 00:11:38.647 "data_offset": 0, 00:11:38.647 "data_size": 65536 00:11:38.647 } 00:11:38.647 ] 00:11:38.647 }' 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.647 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.227 [2024-11-21 03:20:26.549848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.227 03:20:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.227 "name": "Existed_Raid", 00:11:39.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.227 "strip_size_kb": 0, 00:11:39.227 "state": "configuring", 00:11:39.227 "raid_level": "raid1", 00:11:39.227 "superblock": false, 00:11:39.227 "num_base_bdevs": 4, 00:11:39.227 "num_base_bdevs_discovered": 2, 00:11:39.227 "num_base_bdevs_operational": 4, 00:11:39.227 "base_bdevs_list": [ 00:11:39.227 { 00:11:39.227 "name": "BaseBdev1", 00:11:39.227 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:39.227 "is_configured": true, 00:11:39.227 "data_offset": 0, 00:11:39.227 "data_size": 65536 00:11:39.227 }, 00:11:39.227 { 00:11:39.227 "name": null, 00:11:39.227 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:39.227 "is_configured": false, 00:11:39.227 "data_offset": 0, 00:11:39.227 "data_size": 65536 00:11:39.227 }, 00:11:39.227 { 00:11:39.227 "name": null, 00:11:39.227 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:39.227 "is_configured": false, 00:11:39.227 "data_offset": 0, 00:11:39.227 "data_size": 65536 00:11:39.227 }, 00:11:39.227 { 00:11:39.227 "name": "BaseBdev4", 00:11:39.227 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:39.227 "is_configured": true, 00:11:39.227 "data_offset": 0, 00:11:39.227 "data_size": 65536 00:11:39.227 } 00:11:39.227 ] 00:11:39.227 }' 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.227 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.487 03:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.487 03:20:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.487 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.487 03:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.487 [2024-11-21 03:20:27.038109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.487 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.746 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.746 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.746 "name": "Existed_Raid", 00:11:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.746 "strip_size_kb": 0, 00:11:39.746 "state": "configuring", 00:11:39.746 "raid_level": "raid1", 00:11:39.746 "superblock": false, 00:11:39.746 "num_base_bdevs": 4, 00:11:39.746 "num_base_bdevs_discovered": 3, 00:11:39.746 "num_base_bdevs_operational": 4, 00:11:39.746 "base_bdevs_list": [ 00:11:39.746 { 00:11:39.746 "name": "BaseBdev1", 00:11:39.746 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 0, 00:11:39.746 "data_size": 65536 00:11:39.746 }, 00:11:39.746 { 00:11:39.746 "name": null, 00:11:39.746 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:39.746 "is_configured": false, 00:11:39.746 "data_offset": 0, 00:11:39.746 "data_size": 65536 00:11:39.746 }, 00:11:39.746 { 00:11:39.746 "name": "BaseBdev3", 00:11:39.746 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 0, 00:11:39.746 "data_size": 65536 
00:11:39.746 }, 00:11:39.746 { 00:11:39.746 "name": "BaseBdev4", 00:11:39.746 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:39.746 "is_configured": true, 00:11:39.746 "data_offset": 0, 00:11:39.746 "data_size": 65536 00:11:39.746 } 00:11:39.746 ] 00:11:39.746 }' 00:11:39.746 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.746 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.006 [2024-11-21 03:20:27.478230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.006 03:20:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.006 "name": "Existed_Raid", 00:11:40.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.006 "strip_size_kb": 0, 00:11:40.006 "state": "configuring", 00:11:40.006 "raid_level": "raid1", 00:11:40.006 "superblock": false, 00:11:40.006 "num_base_bdevs": 4, 00:11:40.006 "num_base_bdevs_discovered": 2, 00:11:40.006 "num_base_bdevs_operational": 4, 00:11:40.006 "base_bdevs_list": [ 00:11:40.006 { 00:11:40.006 "name": null, 00:11:40.006 
"uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:40.006 "is_configured": false, 00:11:40.006 "data_offset": 0, 00:11:40.006 "data_size": 65536 00:11:40.006 }, 00:11:40.006 { 00:11:40.006 "name": null, 00:11:40.006 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:40.006 "is_configured": false, 00:11:40.006 "data_offset": 0, 00:11:40.006 "data_size": 65536 00:11:40.006 }, 00:11:40.006 { 00:11:40.006 "name": "BaseBdev3", 00:11:40.006 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:40.006 "is_configured": true, 00:11:40.006 "data_offset": 0, 00:11:40.006 "data_size": 65536 00:11:40.006 }, 00:11:40.006 { 00:11:40.006 "name": "BaseBdev4", 00:11:40.006 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:40.006 "is_configured": true, 00:11:40.006 "data_offset": 0, 00:11:40.006 "data_size": 65536 00:11:40.006 } 00:11:40.006 ] 00:11:40.006 }' 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.006 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.574 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.574 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:40.575 03:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.575 [2024-11-21 03:20:27.997095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.575 03:20:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.575 "name": "Existed_Raid", 00:11:40.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.575 "strip_size_kb": 0, 00:11:40.575 "state": "configuring", 00:11:40.575 "raid_level": "raid1", 00:11:40.575 "superblock": false, 00:11:40.575 "num_base_bdevs": 4, 00:11:40.575 "num_base_bdevs_discovered": 3, 00:11:40.575 "num_base_bdevs_operational": 4, 00:11:40.575 "base_bdevs_list": [ 00:11:40.575 { 00:11:40.575 "name": null, 00:11:40.575 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:40.575 "is_configured": false, 00:11:40.575 "data_offset": 0, 00:11:40.575 "data_size": 65536 00:11:40.575 }, 00:11:40.575 { 00:11:40.575 "name": "BaseBdev2", 00:11:40.575 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:40.575 "is_configured": true, 00:11:40.575 "data_offset": 0, 00:11:40.575 "data_size": 65536 00:11:40.575 }, 00:11:40.575 { 00:11:40.575 "name": "BaseBdev3", 00:11:40.575 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:40.575 "is_configured": true, 00:11:40.575 "data_offset": 0, 00:11:40.575 "data_size": 65536 00:11:40.575 }, 00:11:40.575 { 00:11:40.575 "name": "BaseBdev4", 00:11:40.575 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:40.575 "is_configured": true, 00:11:40.575 "data_offset": 0, 00:11:40.575 "data_size": 65536 00:11:40.575 } 00:11:40.575 ] 00:11:40.575 }' 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.575 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.143 03:20:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f53d5bb5-1d8f-49c0-b8a7-362753f1b768 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.143 [2024-11-21 03:20:28.552646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.143 [2024-11-21 03:20:28.552708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.143 [2024-11-21 03:20:28.552717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:41.143 [2024-11-21 03:20:28.553038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:41.143 [2024-11-21 03:20:28.553186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.143 
[2024-11-21 03:20:28.553213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:41.143 [2024-11-21 03:20:28.553421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.143 NewBaseBdev 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.143 [ 00:11:41.143 { 00:11:41.143 "name": "NewBaseBdev", 00:11:41.143 "aliases": [ 00:11:41.143 
"f53d5bb5-1d8f-49c0-b8a7-362753f1b768" 00:11:41.143 ], 00:11:41.143 "product_name": "Malloc disk", 00:11:41.143 "block_size": 512, 00:11:41.143 "num_blocks": 65536, 00:11:41.143 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:41.143 "assigned_rate_limits": { 00:11:41.143 "rw_ios_per_sec": 0, 00:11:41.143 "rw_mbytes_per_sec": 0, 00:11:41.143 "r_mbytes_per_sec": 0, 00:11:41.143 "w_mbytes_per_sec": 0 00:11:41.143 }, 00:11:41.143 "claimed": true, 00:11:41.143 "claim_type": "exclusive_write", 00:11:41.143 "zoned": false, 00:11:41.143 "supported_io_types": { 00:11:41.143 "read": true, 00:11:41.143 "write": true, 00:11:41.143 "unmap": true, 00:11:41.143 "flush": true, 00:11:41.143 "reset": true, 00:11:41.143 "nvme_admin": false, 00:11:41.143 "nvme_io": false, 00:11:41.143 "nvme_io_md": false, 00:11:41.143 "write_zeroes": true, 00:11:41.143 "zcopy": true, 00:11:41.143 "get_zone_info": false, 00:11:41.143 "zone_management": false, 00:11:41.143 "zone_append": false, 00:11:41.143 "compare": false, 00:11:41.143 "compare_and_write": false, 00:11:41.143 "abort": true, 00:11:41.143 "seek_hole": false, 00:11:41.143 "seek_data": false, 00:11:41.143 "copy": true, 00:11:41.143 "nvme_iov_md": false 00:11:41.143 }, 00:11:41.143 "memory_domains": [ 00:11:41.143 { 00:11:41.143 "dma_device_id": "system", 00:11:41.143 "dma_device_type": 1 00:11:41.143 }, 00:11:41.143 { 00:11:41.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.143 "dma_device_type": 2 00:11:41.143 } 00:11:41.143 ], 00:11:41.143 "driver_specific": {} 00:11:41.143 } 00:11:41.143 ] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.143 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.144 "name": "Existed_Raid", 00:11:41.144 "uuid": "ff443c92-c8e1-4b24-933e-b5bd1a4ee140", 00:11:41.144 "strip_size_kb": 0, 00:11:41.144 "state": "online", 00:11:41.144 "raid_level": "raid1", 00:11:41.144 "superblock": false, 00:11:41.144 "num_base_bdevs": 4, 00:11:41.144 "num_base_bdevs_discovered": 4, 00:11:41.144 "num_base_bdevs_operational": 4, 00:11:41.144 "base_bdevs_list": [ 00:11:41.144 
{ 00:11:41.144 "name": "NewBaseBdev", 00:11:41.144 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:41.144 "is_configured": true, 00:11:41.144 "data_offset": 0, 00:11:41.144 "data_size": 65536 00:11:41.144 }, 00:11:41.144 { 00:11:41.144 "name": "BaseBdev2", 00:11:41.144 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:41.144 "is_configured": true, 00:11:41.144 "data_offset": 0, 00:11:41.144 "data_size": 65536 00:11:41.144 }, 00:11:41.144 { 00:11:41.144 "name": "BaseBdev3", 00:11:41.144 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:41.144 "is_configured": true, 00:11:41.144 "data_offset": 0, 00:11:41.144 "data_size": 65536 00:11:41.144 }, 00:11:41.144 { 00:11:41.144 "name": "BaseBdev4", 00:11:41.144 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 00:11:41.144 "is_configured": true, 00:11:41.144 "data_offset": 0, 00:11:41.144 "data_size": 65536 00:11:41.144 } 00:11:41.144 ] 00:11:41.144 }' 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.144 03:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.713 [2024-11-21 03:20:29.065295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.713 "name": "Existed_Raid", 00:11:41.713 "aliases": [ 00:11:41.713 "ff443c92-c8e1-4b24-933e-b5bd1a4ee140" 00:11:41.713 ], 00:11:41.713 "product_name": "Raid Volume", 00:11:41.713 "block_size": 512, 00:11:41.713 "num_blocks": 65536, 00:11:41.713 "uuid": "ff443c92-c8e1-4b24-933e-b5bd1a4ee140", 00:11:41.713 "assigned_rate_limits": { 00:11:41.713 "rw_ios_per_sec": 0, 00:11:41.713 "rw_mbytes_per_sec": 0, 00:11:41.713 "r_mbytes_per_sec": 0, 00:11:41.713 "w_mbytes_per_sec": 0 00:11:41.713 }, 00:11:41.713 "claimed": false, 00:11:41.713 "zoned": false, 00:11:41.713 "supported_io_types": { 00:11:41.713 "read": true, 00:11:41.713 "write": true, 00:11:41.713 "unmap": false, 00:11:41.713 "flush": false, 00:11:41.713 "reset": true, 00:11:41.713 "nvme_admin": false, 00:11:41.713 "nvme_io": false, 00:11:41.713 "nvme_io_md": false, 00:11:41.713 "write_zeroes": true, 00:11:41.713 "zcopy": false, 00:11:41.713 "get_zone_info": false, 00:11:41.713 "zone_management": false, 00:11:41.713 "zone_append": false, 00:11:41.713 "compare": false, 00:11:41.713 "compare_and_write": false, 00:11:41.713 "abort": false, 00:11:41.713 "seek_hole": false, 00:11:41.713 "seek_data": false, 00:11:41.713 "copy": false, 00:11:41.713 "nvme_iov_md": false 00:11:41.713 }, 00:11:41.713 "memory_domains": [ 00:11:41.713 { 00:11:41.713 "dma_device_id": "system", 00:11:41.713 "dma_device_type": 1 00:11:41.713 }, 00:11:41.713 { 
00:11:41.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.713 "dma_device_type": 2 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "dma_device_id": "system", 00:11:41.713 "dma_device_type": 1 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.713 "dma_device_type": 2 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "dma_device_id": "system", 00:11:41.713 "dma_device_type": 1 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.713 "dma_device_type": 2 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "dma_device_id": "system", 00:11:41.713 "dma_device_type": 1 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.713 "dma_device_type": 2 00:11:41.713 } 00:11:41.713 ], 00:11:41.713 "driver_specific": { 00:11:41.713 "raid": { 00:11:41.713 "uuid": "ff443c92-c8e1-4b24-933e-b5bd1a4ee140", 00:11:41.713 "strip_size_kb": 0, 00:11:41.713 "state": "online", 00:11:41.713 "raid_level": "raid1", 00:11:41.713 "superblock": false, 00:11:41.713 "num_base_bdevs": 4, 00:11:41.713 "num_base_bdevs_discovered": 4, 00:11:41.713 "num_base_bdevs_operational": 4, 00:11:41.713 "base_bdevs_list": [ 00:11:41.713 { 00:11:41.713 "name": "NewBaseBdev", 00:11:41.713 "uuid": "f53d5bb5-1d8f-49c0-b8a7-362753f1b768", 00:11:41.713 "is_configured": true, 00:11:41.713 "data_offset": 0, 00:11:41.713 "data_size": 65536 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "name": "BaseBdev2", 00:11:41.713 "uuid": "e075366f-045c-42bf-932f-b71bd95c96b8", 00:11:41.713 "is_configured": true, 00:11:41.713 "data_offset": 0, 00:11:41.713 "data_size": 65536 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "name": "BaseBdev3", 00:11:41.713 "uuid": "30f1b815-86b6-40ad-9d4f-880cdcf8c542", 00:11:41.713 "is_configured": true, 00:11:41.713 "data_offset": 0, 00:11:41.713 "data_size": 65536 00:11:41.713 }, 00:11:41.713 { 00:11:41.713 "name": "BaseBdev4", 00:11:41.713 "uuid": "07131f2c-30ab-4d06-8810-41f6dceff87c", 
00:11:41.713 "is_configured": true, 00:11:41.713 "data_offset": 0, 00:11:41.713 "data_size": 65536 00:11:41.713 } 00:11:41.713 ] 00:11:41.713 } 00:11:41.713 } 00:11:41.713 }' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.713 BaseBdev2 00:11:41.713 BaseBdev3 00:11:41.713 BaseBdev4' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.713 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.714 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:41.714 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.714 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.714 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.714 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.972 [2024-11-21 03:20:29.412997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.972 [2024-11-21 03:20:29.413120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.972 [2024-11-21 03:20:29.413255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.972 [2024-11-21 03:20:29.413600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.972 [2024-11-21 03:20:29.413665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 86030 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 86030 ']' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 86030 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86030 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.972 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86030' 00:11:41.972 killing process with pid 86030 00:11:41.973 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 86030 00:11:41.973 [2024-11-21 03:20:29.463511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.973 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 86030 00:11:41.973 [2024-11-21 03:20:29.507371] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.231 03:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:42.231 00:11:42.231 real 0m9.872s 00:11:42.231 user 0m16.817s 00:11:42.231 sys 0m2.211s 00:11:42.231 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.231 03:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.231 ************************************ 00:11:42.231 END TEST raid_state_function_test 00:11:42.231 ************************************ 00:11:42.489 03:20:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:42.489 03:20:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:42.489 03:20:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.489 03:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.489 ************************************ 00:11:42.489 START TEST raid_state_function_test_sb 00:11:42.489 ************************************ 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.489 03:20:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86685 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86685' 00:11:42.489 Process raid 
pid: 86685 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86685 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86685 ']' 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.489 03:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.489 [2024-11-21 03:20:29.926397] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:42.489 [2024-11-21 03:20:29.927143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.747 [2024-11-21 03:20:30.070643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:42.747 [2024-11-21 03:20:30.110950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.747 [2024-11-21 03:20:30.143070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.747 [2024-11-21 03:20:30.188808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.747 [2024-11-21 03:20:30.188945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.313 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.313 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:43.313 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.313 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.313 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.313 [2024-11-21 03:20:30.849279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.313 [2024-11-21 03:20:30.849420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.313 [2024-11-21 03:20:30.849442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.313 [2024-11-21 03:20:30.849453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.313 [2024-11-21 03:20:30.849465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.313 [2024-11-21 03:20:30.849475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.314 [2024-11-21 03:20:30.849486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.314 
[2024-11-21 03:20:30.849495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.314 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.572 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:43.572 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.572 "name": "Existed_Raid", 00:11:43.572 "uuid": "0e4d73df-905d-45c3-9f4b-5c16814fc0a7", 00:11:43.572 "strip_size_kb": 0, 00:11:43.572 "state": "configuring", 00:11:43.572 "raid_level": "raid1", 00:11:43.572 "superblock": true, 00:11:43.572 "num_base_bdevs": 4, 00:11:43.572 "num_base_bdevs_discovered": 0, 00:11:43.572 "num_base_bdevs_operational": 4, 00:11:43.572 "base_bdevs_list": [ 00:11:43.572 { 00:11:43.572 "name": "BaseBdev1", 00:11:43.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.572 "is_configured": false, 00:11:43.572 "data_offset": 0, 00:11:43.572 "data_size": 0 00:11:43.572 }, 00:11:43.572 { 00:11:43.572 "name": "BaseBdev2", 00:11:43.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.572 "is_configured": false, 00:11:43.572 "data_offset": 0, 00:11:43.572 "data_size": 0 00:11:43.572 }, 00:11:43.572 { 00:11:43.572 "name": "BaseBdev3", 00:11:43.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.572 "is_configured": false, 00:11:43.572 "data_offset": 0, 00:11:43.572 "data_size": 0 00:11:43.572 }, 00:11:43.572 { 00:11:43.572 "name": "BaseBdev4", 00:11:43.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.572 "is_configured": false, 00:11:43.572 "data_offset": 0, 00:11:43.572 "data_size": 0 00:11:43.572 } 00:11:43.572 ] 00:11:43.572 }' 00:11:43.572 03:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.572 03:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.830 
[2024-11-21 03:20:31.317304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.830 [2024-11-21 03:20:31.317416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.830 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.830 [2024-11-21 03:20:31.329364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.830 [2024-11-21 03:20:31.329466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.830 [2024-11-21 03:20:31.329503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.830 [2024-11-21 03:20:31.329531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.830 [2024-11-21 03:20:31.329556] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.830 [2024-11-21 03:20:31.329580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.830 [2024-11-21 03:20:31.329604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.830 [2024-11-21 03:20:31.329628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.831 03:20:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.831 [2024-11-21 03:20:31.351092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.831 BaseBdev1 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.831 [ 00:11:43.831 { 00:11:43.831 "name": "BaseBdev1", 00:11:43.831 "aliases": [ 00:11:43.831 "b7176c31-dc3c-44ba-aac5-27fabae9c7ae" 00:11:43.831 ], 00:11:43.831 "product_name": "Malloc disk", 00:11:43.831 "block_size": 512, 00:11:43.831 "num_blocks": 65536, 00:11:43.831 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:43.831 "assigned_rate_limits": { 00:11:43.831 "rw_ios_per_sec": 0, 00:11:43.831 "rw_mbytes_per_sec": 0, 00:11:43.831 "r_mbytes_per_sec": 0, 00:11:43.831 "w_mbytes_per_sec": 0 00:11:43.831 }, 00:11:43.831 "claimed": true, 00:11:43.831 "claim_type": "exclusive_write", 00:11:43.831 "zoned": false, 00:11:43.831 "supported_io_types": { 00:11:43.831 "read": true, 00:11:43.831 "write": true, 00:11:43.831 "unmap": true, 00:11:43.831 "flush": true, 00:11:43.831 "reset": true, 00:11:43.831 "nvme_admin": false, 00:11:43.831 "nvme_io": false, 00:11:43.831 "nvme_io_md": false, 00:11:43.831 "write_zeroes": true, 00:11:43.831 "zcopy": true, 00:11:43.831 "get_zone_info": false, 00:11:43.831 "zone_management": false, 00:11:43.831 "zone_append": false, 00:11:43.831 "compare": false, 00:11:43.831 "compare_and_write": false, 00:11:43.831 "abort": true, 00:11:43.831 "seek_hole": false, 00:11:43.831 "seek_data": false, 00:11:43.831 "copy": true, 00:11:43.831 "nvme_iov_md": false 00:11:43.831 }, 00:11:43.831 "memory_domains": [ 00:11:43.831 { 00:11:43.831 "dma_device_id": "system", 00:11:43.831 "dma_device_type": 1 00:11:43.831 }, 00:11:43.831 { 00:11:43.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.831 "dma_device_type": 2 00:11:43.831 } 00:11:43.831 ], 00:11:43.831 "driver_specific": {} 00:11:43.831 } 00:11:43.831 ] 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.831 
03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.831 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.089 "name": "Existed_Raid", 00:11:44.089 "uuid": "ec2fea89-9e3c-4aef-8532-9abaa60f973e", 00:11:44.089 "strip_size_kb": 0, 
00:11:44.089 "state": "configuring", 00:11:44.089 "raid_level": "raid1", 00:11:44.089 "superblock": true, 00:11:44.089 "num_base_bdevs": 4, 00:11:44.089 "num_base_bdevs_discovered": 1, 00:11:44.089 "num_base_bdevs_operational": 4, 00:11:44.089 "base_bdevs_list": [ 00:11:44.089 { 00:11:44.089 "name": "BaseBdev1", 00:11:44.089 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:44.089 "is_configured": true, 00:11:44.089 "data_offset": 2048, 00:11:44.089 "data_size": 63488 00:11:44.089 }, 00:11:44.089 { 00:11:44.089 "name": "BaseBdev2", 00:11:44.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.089 "is_configured": false, 00:11:44.089 "data_offset": 0, 00:11:44.089 "data_size": 0 00:11:44.089 }, 00:11:44.089 { 00:11:44.089 "name": "BaseBdev3", 00:11:44.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.089 "is_configured": false, 00:11:44.089 "data_offset": 0, 00:11:44.089 "data_size": 0 00:11:44.089 }, 00:11:44.089 { 00:11:44.089 "name": "BaseBdev4", 00:11:44.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.089 "is_configured": false, 00:11:44.089 "data_offset": 0, 00:11:44.089 "data_size": 0 00:11:44.089 } 00:11:44.089 ] 00:11:44.089 }' 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.089 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.348 [2024-11-21 03:20:31.867320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.348 [2024-11-21 03:20:31.867401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.348 [2024-11-21 03:20:31.875398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.348 [2024-11-21 03:20:31.877746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.348 [2024-11-21 03:20:31.877866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.348 [2024-11-21 03:20:31.877904] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.348 [2024-11-21 03:20:31.877934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.348 [2024-11-21 03:20:31.877959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:44.348 [2024-11-21 03:20:31.877984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.348 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.607 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.607 "name": "Existed_Raid", 00:11:44.607 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:44.607 "strip_size_kb": 0, 00:11:44.607 "state": "configuring", 00:11:44.607 "raid_level": "raid1", 00:11:44.607 "superblock": true, 00:11:44.607 "num_base_bdevs": 4, 00:11:44.607 "num_base_bdevs_discovered": 1, 00:11:44.607 
"num_base_bdevs_operational": 4, 00:11:44.607 "base_bdevs_list": [ 00:11:44.607 { 00:11:44.607 "name": "BaseBdev1", 00:11:44.607 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:44.607 "is_configured": true, 00:11:44.607 "data_offset": 2048, 00:11:44.607 "data_size": 63488 00:11:44.607 }, 00:11:44.607 { 00:11:44.607 "name": "BaseBdev2", 00:11:44.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.607 "is_configured": false, 00:11:44.607 "data_offset": 0, 00:11:44.607 "data_size": 0 00:11:44.607 }, 00:11:44.607 { 00:11:44.607 "name": "BaseBdev3", 00:11:44.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.607 "is_configured": false, 00:11:44.607 "data_offset": 0, 00:11:44.607 "data_size": 0 00:11:44.607 }, 00:11:44.607 { 00:11:44.607 "name": "BaseBdev4", 00:11:44.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.607 "is_configured": false, 00:11:44.607 "data_offset": 0, 00:11:44.607 "data_size": 0 00:11:44.607 } 00:11:44.607 ] 00:11:44.607 }' 00:11:44.607 03:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.607 03:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.866 [2024-11-21 03:20:32.354922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.866 BaseBdev2 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.866 [ 00:11:44.866 { 00:11:44.866 "name": "BaseBdev2", 00:11:44.866 "aliases": [ 00:11:44.866 "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8" 00:11:44.866 ], 00:11:44.866 "product_name": "Malloc disk", 00:11:44.866 "block_size": 512, 00:11:44.866 "num_blocks": 65536, 00:11:44.866 "uuid": "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8", 00:11:44.866 "assigned_rate_limits": { 00:11:44.866 "rw_ios_per_sec": 0, 00:11:44.866 "rw_mbytes_per_sec": 0, 00:11:44.866 "r_mbytes_per_sec": 0, 00:11:44.866 "w_mbytes_per_sec": 0 00:11:44.866 }, 00:11:44.866 "claimed": true, 00:11:44.866 "claim_type": "exclusive_write", 00:11:44.866 "zoned": false, 00:11:44.866 "supported_io_types": { 
00:11:44.866 "read": true, 00:11:44.866 "write": true, 00:11:44.866 "unmap": true, 00:11:44.866 "flush": true, 00:11:44.866 "reset": true, 00:11:44.866 "nvme_admin": false, 00:11:44.866 "nvme_io": false, 00:11:44.866 "nvme_io_md": false, 00:11:44.866 "write_zeroes": true, 00:11:44.866 "zcopy": true, 00:11:44.866 "get_zone_info": false, 00:11:44.866 "zone_management": false, 00:11:44.866 "zone_append": false, 00:11:44.866 "compare": false, 00:11:44.866 "compare_and_write": false, 00:11:44.866 "abort": true, 00:11:44.866 "seek_hole": false, 00:11:44.866 "seek_data": false, 00:11:44.866 "copy": true, 00:11:44.866 "nvme_iov_md": false 00:11:44.866 }, 00:11:44.866 "memory_domains": [ 00:11:44.866 { 00:11:44.866 "dma_device_id": "system", 00:11:44.866 "dma_device_type": 1 00:11:44.866 }, 00:11:44.866 { 00:11:44.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.866 "dma_device_type": 2 00:11:44.866 } 00:11:44.866 ], 00:11:44.866 "driver_specific": {} 00:11:44.866 } 00:11:44.866 ] 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.866 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.125 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.125 "name": "Existed_Raid", 00:11:45.125 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:45.125 "strip_size_kb": 0, 00:11:45.125 "state": "configuring", 00:11:45.125 "raid_level": "raid1", 00:11:45.125 "superblock": true, 00:11:45.125 "num_base_bdevs": 4, 00:11:45.125 "num_base_bdevs_discovered": 2, 00:11:45.125 "num_base_bdevs_operational": 4, 00:11:45.125 "base_bdevs_list": [ 00:11:45.125 { 00:11:45.125 "name": "BaseBdev1", 00:11:45.125 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:45.125 "is_configured": true, 00:11:45.125 "data_offset": 2048, 00:11:45.125 "data_size": 63488 00:11:45.125 }, 00:11:45.125 { 00:11:45.125 "name": "BaseBdev2", 00:11:45.125 
"uuid": "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8", 00:11:45.125 "is_configured": true, 00:11:45.125 "data_offset": 2048, 00:11:45.125 "data_size": 63488 00:11:45.125 }, 00:11:45.125 { 00:11:45.125 "name": "BaseBdev3", 00:11:45.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.125 "is_configured": false, 00:11:45.125 "data_offset": 0, 00:11:45.125 "data_size": 0 00:11:45.125 }, 00:11:45.125 { 00:11:45.125 "name": "BaseBdev4", 00:11:45.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.125 "is_configured": false, 00:11:45.125 "data_offset": 0, 00:11:45.125 "data_size": 0 00:11:45.125 } 00:11:45.125 ] 00:11:45.125 }' 00:11:45.125 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.125 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.385 [2024-11-21 03:20:32.851705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.385 BaseBdev3 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.385 [ 00:11:45.385 { 00:11:45.385 "name": "BaseBdev3", 00:11:45.385 "aliases": [ 00:11:45.385 "7c742588-ac3d-4d8a-b107-505fb1d56cb4" 00:11:45.385 ], 00:11:45.385 "product_name": "Malloc disk", 00:11:45.385 "block_size": 512, 00:11:45.385 "num_blocks": 65536, 00:11:45.385 "uuid": "7c742588-ac3d-4d8a-b107-505fb1d56cb4", 00:11:45.385 "assigned_rate_limits": { 00:11:45.385 "rw_ios_per_sec": 0, 00:11:45.385 "rw_mbytes_per_sec": 0, 00:11:45.385 "r_mbytes_per_sec": 0, 00:11:45.385 "w_mbytes_per_sec": 0 00:11:45.385 }, 00:11:45.385 "claimed": true, 00:11:45.385 "claim_type": "exclusive_write", 00:11:45.385 "zoned": false, 00:11:45.385 "supported_io_types": { 00:11:45.385 "read": true, 00:11:45.385 "write": true, 00:11:45.385 "unmap": true, 00:11:45.385 "flush": true, 00:11:45.385 "reset": true, 00:11:45.385 "nvme_admin": false, 00:11:45.385 "nvme_io": false, 00:11:45.385 "nvme_io_md": false, 00:11:45.385 "write_zeroes": true, 00:11:45.385 "zcopy": true, 00:11:45.385 "get_zone_info": false, 00:11:45.385 
"zone_management": false, 00:11:45.385 "zone_append": false, 00:11:45.385 "compare": false, 00:11:45.385 "compare_and_write": false, 00:11:45.385 "abort": true, 00:11:45.385 "seek_hole": false, 00:11:45.385 "seek_data": false, 00:11:45.385 "copy": true, 00:11:45.385 "nvme_iov_md": false 00:11:45.385 }, 00:11:45.385 "memory_domains": [ 00:11:45.385 { 00:11:45.385 "dma_device_id": "system", 00:11:45.385 "dma_device_type": 1 00:11:45.385 }, 00:11:45.385 { 00:11:45.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.385 "dma_device_type": 2 00:11:45.385 } 00:11:45.385 ], 00:11:45.385 "driver_specific": {} 00:11:45.385 } 00:11:45.385 ] 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.385 "name": "Existed_Raid", 00:11:45.385 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:45.385 "strip_size_kb": 0, 00:11:45.385 "state": "configuring", 00:11:45.385 "raid_level": "raid1", 00:11:45.385 "superblock": true, 00:11:45.385 "num_base_bdevs": 4, 00:11:45.385 "num_base_bdevs_discovered": 3, 00:11:45.385 "num_base_bdevs_operational": 4, 00:11:45.385 "base_bdevs_list": [ 00:11:45.385 { 00:11:45.385 "name": "BaseBdev1", 00:11:45.385 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:45.385 "is_configured": true, 00:11:45.385 "data_offset": 2048, 00:11:45.385 "data_size": 63488 00:11:45.385 }, 00:11:45.385 { 00:11:45.385 "name": "BaseBdev2", 00:11:45.385 "uuid": "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8", 00:11:45.385 "is_configured": true, 00:11:45.385 "data_offset": 2048, 00:11:45.385 "data_size": 63488 00:11:45.385 }, 00:11:45.385 { 00:11:45.385 "name": "BaseBdev3", 00:11:45.385 "uuid": "7c742588-ac3d-4d8a-b107-505fb1d56cb4", 00:11:45.385 "is_configured": true, 00:11:45.385 "data_offset": 2048, 
00:11:45.385 "data_size": 63488 00:11:45.385 }, 00:11:45.385 { 00:11:45.385 "name": "BaseBdev4", 00:11:45.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.385 "is_configured": false, 00:11:45.385 "data_offset": 0, 00:11:45.385 "data_size": 0 00:11:45.385 } 00:11:45.385 ] 00:11:45.385 }' 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.385 03:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.953 [2024-11-21 03:20:33.347279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.953 [2024-11-21 03:20:33.347590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:45.953 [2024-11-21 03:20:33.347664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.953 BaseBdev4 00:11:45.953 [2024-11-21 03:20:33.347979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:45.953 [2024-11-21 03:20:33.348195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:45.953 [2024-11-21 03:20:33.348266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:45.953 [2024-11-21 03:20:33.348451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.953 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.953 [ 00:11:45.953 { 00:11:45.953 "name": "BaseBdev4", 00:11:45.953 "aliases": [ 00:11:45.954 "7125f83e-9cb0-46c0-b48d-6df51e7e456b" 00:11:45.954 ], 00:11:45.954 "product_name": "Malloc disk", 00:11:45.954 "block_size": 512, 00:11:45.954 "num_blocks": 65536, 00:11:45.954 "uuid": "7125f83e-9cb0-46c0-b48d-6df51e7e456b", 00:11:45.954 "assigned_rate_limits": { 00:11:45.954 "rw_ios_per_sec": 0, 00:11:45.954 "rw_mbytes_per_sec": 0, 00:11:45.954 "r_mbytes_per_sec": 0, 00:11:45.954 "w_mbytes_per_sec": 0 00:11:45.954 }, 00:11:45.954 "claimed": true, 00:11:45.954 "claim_type": 
"exclusive_write", 00:11:45.954 "zoned": false, 00:11:45.954 "supported_io_types": { 00:11:45.954 "read": true, 00:11:45.954 "write": true, 00:11:45.954 "unmap": true, 00:11:45.954 "flush": true, 00:11:45.954 "reset": true, 00:11:45.954 "nvme_admin": false, 00:11:45.954 "nvme_io": false, 00:11:45.954 "nvme_io_md": false, 00:11:45.954 "write_zeroes": true, 00:11:45.954 "zcopy": true, 00:11:45.954 "get_zone_info": false, 00:11:45.954 "zone_management": false, 00:11:45.954 "zone_append": false, 00:11:45.954 "compare": false, 00:11:45.954 "compare_and_write": false, 00:11:45.954 "abort": true, 00:11:45.954 "seek_hole": false, 00:11:45.954 "seek_data": false, 00:11:45.954 "copy": true, 00:11:45.954 "nvme_iov_md": false 00:11:45.954 }, 00:11:45.954 "memory_domains": [ 00:11:45.954 { 00:11:45.954 "dma_device_id": "system", 00:11:45.954 "dma_device_type": 1 00:11:45.954 }, 00:11:45.954 { 00:11:45.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.954 "dma_device_type": 2 00:11:45.954 } 00:11:45.954 ], 00:11:45.954 "driver_specific": {} 00:11:45.954 } 00:11:45.954 ] 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.954 "name": "Existed_Raid", 00:11:45.954 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:45.954 "strip_size_kb": 0, 00:11:45.954 "state": "online", 00:11:45.954 "raid_level": "raid1", 00:11:45.954 "superblock": true, 00:11:45.954 "num_base_bdevs": 4, 00:11:45.954 "num_base_bdevs_discovered": 4, 00:11:45.954 "num_base_bdevs_operational": 4, 00:11:45.954 "base_bdevs_list": [ 00:11:45.954 { 00:11:45.954 "name": "BaseBdev1", 00:11:45.954 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:45.954 "is_configured": true, 00:11:45.954 "data_offset": 2048, 00:11:45.954 "data_size": 63488 
00:11:45.954 }, 00:11:45.954 { 00:11:45.954 "name": "BaseBdev2", 00:11:45.954 "uuid": "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8", 00:11:45.954 "is_configured": true, 00:11:45.954 "data_offset": 2048, 00:11:45.954 "data_size": 63488 00:11:45.954 }, 00:11:45.954 { 00:11:45.954 "name": "BaseBdev3", 00:11:45.954 "uuid": "7c742588-ac3d-4d8a-b107-505fb1d56cb4", 00:11:45.954 "is_configured": true, 00:11:45.954 "data_offset": 2048, 00:11:45.954 "data_size": 63488 00:11:45.954 }, 00:11:45.954 { 00:11:45.954 "name": "BaseBdev4", 00:11:45.954 "uuid": "7125f83e-9cb0-46c0-b48d-6df51e7e456b", 00:11:45.954 "is_configured": true, 00:11:45.954 "data_offset": 2048, 00:11:45.954 "data_size": 63488 00:11:45.954 } 00:11:45.954 ] 00:11:45.954 }' 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.954 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.523 
03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.523 [2024-11-21 03:20:33.827804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.523 "name": "Existed_Raid", 00:11:46.523 "aliases": [ 00:11:46.523 "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c" 00:11:46.523 ], 00:11:46.523 "product_name": "Raid Volume", 00:11:46.523 "block_size": 512, 00:11:46.523 "num_blocks": 63488, 00:11:46.523 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:46.523 "assigned_rate_limits": { 00:11:46.523 "rw_ios_per_sec": 0, 00:11:46.523 "rw_mbytes_per_sec": 0, 00:11:46.523 "r_mbytes_per_sec": 0, 00:11:46.523 "w_mbytes_per_sec": 0 00:11:46.523 }, 00:11:46.523 "claimed": false, 00:11:46.523 "zoned": false, 00:11:46.523 "supported_io_types": { 00:11:46.523 "read": true, 00:11:46.523 "write": true, 00:11:46.523 "unmap": false, 00:11:46.523 "flush": false, 00:11:46.523 "reset": true, 00:11:46.523 "nvme_admin": false, 00:11:46.523 "nvme_io": false, 00:11:46.523 "nvme_io_md": false, 00:11:46.523 "write_zeroes": true, 00:11:46.523 "zcopy": false, 00:11:46.523 "get_zone_info": false, 00:11:46.523 "zone_management": false, 00:11:46.523 "zone_append": false, 00:11:46.523 "compare": false, 00:11:46.523 "compare_and_write": false, 00:11:46.523 "abort": false, 00:11:46.523 "seek_hole": false, 00:11:46.523 "seek_data": false, 00:11:46.523 "copy": false, 00:11:46.523 "nvme_iov_md": false 00:11:46.523 }, 00:11:46.523 "memory_domains": [ 00:11:46.523 { 00:11:46.523 "dma_device_id": "system", 00:11:46.523 "dma_device_type": 1 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.523 "dma_device_type": 2 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "system", 
00:11:46.523 "dma_device_type": 1 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.523 "dma_device_type": 2 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "system", 00:11:46.523 "dma_device_type": 1 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.523 "dma_device_type": 2 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "system", 00:11:46.523 "dma_device_type": 1 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.523 "dma_device_type": 2 00:11:46.523 } 00:11:46.523 ], 00:11:46.523 "driver_specific": { 00:11:46.523 "raid": { 00:11:46.523 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:46.523 "strip_size_kb": 0, 00:11:46.523 "state": "online", 00:11:46.523 "raid_level": "raid1", 00:11:46.523 "superblock": true, 00:11:46.523 "num_base_bdevs": 4, 00:11:46.523 "num_base_bdevs_discovered": 4, 00:11:46.523 "num_base_bdevs_operational": 4, 00:11:46.523 "base_bdevs_list": [ 00:11:46.523 { 00:11:46.523 "name": "BaseBdev1", 00:11:46.523 "uuid": "b7176c31-dc3c-44ba-aac5-27fabae9c7ae", 00:11:46.523 "is_configured": true, 00:11:46.523 "data_offset": 2048, 00:11:46.523 "data_size": 63488 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "name": "BaseBdev2", 00:11:46.523 "uuid": "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8", 00:11:46.523 "is_configured": true, 00:11:46.523 "data_offset": 2048, 00:11:46.523 "data_size": 63488 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "name": "BaseBdev3", 00:11:46.523 "uuid": "7c742588-ac3d-4d8a-b107-505fb1d56cb4", 00:11:46.523 "is_configured": true, 00:11:46.523 "data_offset": 2048, 00:11:46.523 "data_size": 63488 00:11:46.523 }, 00:11:46.523 { 00:11:46.523 "name": "BaseBdev4", 00:11:46.523 "uuid": "7125f83e-9cb0-46c0-b48d-6df51e7e456b", 00:11:46.523 "is_configured": true, 00:11:46.523 "data_offset": 2048, 00:11:46.523 "data_size": 63488 00:11:46.523 } 00:11:46.523 ] 00:11:46.523 } 00:11:46.523 
} 00:11:46.523 }' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:46.523 BaseBdev2 00:11:46.523 BaseBdev3 00:11:46.523 BaseBdev4' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.523 03:20:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.523 03:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.523 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.784 [2024-11-21 03:20:34.143657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.784 "name": "Existed_Raid", 00:11:46.784 "uuid": "ffd2ffe2-4a9b-4ee5-ae0b-a73aea38019c", 00:11:46.784 "strip_size_kb": 0, 00:11:46.784 "state": "online", 00:11:46.784 "raid_level": "raid1", 00:11:46.784 "superblock": true, 00:11:46.784 "num_base_bdevs": 4, 00:11:46.784 "num_base_bdevs_discovered": 3, 00:11:46.784 "num_base_bdevs_operational": 3, 00:11:46.784 "base_bdevs_list": [ 00:11:46.784 { 00:11:46.784 "name": null, 00:11:46.784 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:46.784 "is_configured": false, 00:11:46.784 "data_offset": 0, 00:11:46.784 "data_size": 63488 00:11:46.784 }, 00:11:46.784 { 00:11:46.784 "name": "BaseBdev2", 00:11:46.784 "uuid": "3bd8a56b-15cf-4c61-be5f-c29b9a6e77d8", 00:11:46.784 "is_configured": true, 00:11:46.784 "data_offset": 2048, 00:11:46.784 "data_size": 63488 00:11:46.784 }, 00:11:46.784 { 00:11:46.784 "name": "BaseBdev3", 00:11:46.784 "uuid": "7c742588-ac3d-4d8a-b107-505fb1d56cb4", 00:11:46.784 "is_configured": true, 00:11:46.784 "data_offset": 2048, 00:11:46.784 "data_size": 63488 00:11:46.784 }, 00:11:46.784 { 00:11:46.784 "name": "BaseBdev4", 00:11:46.784 "uuid": "7125f83e-9cb0-46c0-b48d-6df51e7e456b", 00:11:46.784 "is_configured": true, 00:11:46.784 "data_offset": 2048, 00:11:46.784 "data_size": 63488 00:11:46.784 } 00:11:46.784 ] 00:11:46.784 }' 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.784 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.045 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.305 03:20:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.305 [2024-11-21 03:20:34.623462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.305 [2024-11-21 03:20:34.691011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.305 [2024-11-21 03:20:34.758722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:47.305 [2024-11-21 03:20:34.758927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.305 [2024-11-21 03:20:34.770734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.305 [2024-11-21 
03:20:34.770858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.305 [2024-11-21 03:20:34.770899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:47.305 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.306 03:20:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.306 BaseBdev2 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.306 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.306 [ 00:11:47.306 { 00:11:47.306 "name": "BaseBdev2", 00:11:47.306 "aliases": [ 00:11:47.306 "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a" 00:11:47.306 ], 00:11:47.306 "product_name": "Malloc disk", 00:11:47.306 "block_size": 512, 00:11:47.306 "num_blocks": 65536, 00:11:47.306 
"uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:47.306 "assigned_rate_limits": { 00:11:47.306 "rw_ios_per_sec": 0, 00:11:47.306 "rw_mbytes_per_sec": 0, 00:11:47.306 "r_mbytes_per_sec": 0, 00:11:47.306 "w_mbytes_per_sec": 0 00:11:47.306 }, 00:11:47.306 "claimed": false, 00:11:47.306 "zoned": false, 00:11:47.306 "supported_io_types": { 00:11:47.306 "read": true, 00:11:47.306 "write": true, 00:11:47.306 "unmap": true, 00:11:47.306 "flush": true, 00:11:47.306 "reset": true, 00:11:47.306 "nvme_admin": false, 00:11:47.306 "nvme_io": false, 00:11:47.306 "nvme_io_md": false, 00:11:47.306 "write_zeroes": true, 00:11:47.306 "zcopy": true, 00:11:47.306 "get_zone_info": false, 00:11:47.306 "zone_management": false, 00:11:47.306 "zone_append": false, 00:11:47.306 "compare": false, 00:11:47.306 "compare_and_write": false, 00:11:47.306 "abort": true, 00:11:47.306 "seek_hole": false, 00:11:47.306 "seek_data": false, 00:11:47.566 "copy": true, 00:11:47.566 "nvme_iov_md": false 00:11:47.566 }, 00:11:47.566 "memory_domains": [ 00:11:47.566 { 00:11:47.566 "dma_device_id": "system", 00:11:47.566 "dma_device_type": 1 00:11:47.566 }, 00:11:47.566 { 00:11:47.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.566 "dma_device_type": 2 00:11:47.566 } 00:11:47.566 ], 00:11:47.566 "driver_specific": {} 00:11:47.566 } 00:11:47.566 ] 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.566 BaseBdev3 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.566 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.566 [ 00:11:47.566 { 00:11:47.566 "name": "BaseBdev3", 00:11:47.566 "aliases": [ 00:11:47.566 "a3b2a38d-6d39-4871-af92-5727404f8f67" 00:11:47.566 ], 00:11:47.567 "product_name": "Malloc disk", 00:11:47.567 "block_size": 512, 
00:11:47.567 "num_blocks": 65536, 00:11:47.567 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:47.567 "assigned_rate_limits": { 00:11:47.567 "rw_ios_per_sec": 0, 00:11:47.567 "rw_mbytes_per_sec": 0, 00:11:47.567 "r_mbytes_per_sec": 0, 00:11:47.567 "w_mbytes_per_sec": 0 00:11:47.567 }, 00:11:47.567 "claimed": false, 00:11:47.567 "zoned": false, 00:11:47.567 "supported_io_types": { 00:11:47.567 "read": true, 00:11:47.567 "write": true, 00:11:47.567 "unmap": true, 00:11:47.567 "flush": true, 00:11:47.567 "reset": true, 00:11:47.567 "nvme_admin": false, 00:11:47.567 "nvme_io": false, 00:11:47.567 "nvme_io_md": false, 00:11:47.567 "write_zeroes": true, 00:11:47.567 "zcopy": true, 00:11:47.567 "get_zone_info": false, 00:11:47.567 "zone_management": false, 00:11:47.567 "zone_append": false, 00:11:47.567 "compare": false, 00:11:47.567 "compare_and_write": false, 00:11:47.567 "abort": true, 00:11:47.567 "seek_hole": false, 00:11:47.567 "seek_data": false, 00:11:47.567 "copy": true, 00:11:47.567 "nvme_iov_md": false 00:11:47.567 }, 00:11:47.567 "memory_domains": [ 00:11:47.567 { 00:11:47.567 "dma_device_id": "system", 00:11:47.567 "dma_device_type": 1 00:11:47.567 }, 00:11:47.567 { 00:11:47.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.567 "dma_device_type": 2 00:11:47.567 } 00:11:47.567 ], 00:11:47.567 "driver_specific": {} 00:11:47.567 } 00:11:47.567 ] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.567 03:20:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.567 BaseBdev4 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.567 [ 00:11:47.567 { 00:11:47.567 "name": "BaseBdev4", 00:11:47.567 "aliases": [ 00:11:47.567 "efb9afb9-73ac-4308-9a5e-d728f91f2552" 00:11:47.567 ], 
00:11:47.567 "product_name": "Malloc disk", 00:11:47.567 "block_size": 512, 00:11:47.567 "num_blocks": 65536, 00:11:47.567 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:47.567 "assigned_rate_limits": { 00:11:47.567 "rw_ios_per_sec": 0, 00:11:47.567 "rw_mbytes_per_sec": 0, 00:11:47.567 "r_mbytes_per_sec": 0, 00:11:47.567 "w_mbytes_per_sec": 0 00:11:47.567 }, 00:11:47.567 "claimed": false, 00:11:47.567 "zoned": false, 00:11:47.567 "supported_io_types": { 00:11:47.567 "read": true, 00:11:47.567 "write": true, 00:11:47.567 "unmap": true, 00:11:47.567 "flush": true, 00:11:47.567 "reset": true, 00:11:47.567 "nvme_admin": false, 00:11:47.567 "nvme_io": false, 00:11:47.567 "nvme_io_md": false, 00:11:47.567 "write_zeroes": true, 00:11:47.567 "zcopy": true, 00:11:47.567 "get_zone_info": false, 00:11:47.567 "zone_management": false, 00:11:47.567 "zone_append": false, 00:11:47.567 "compare": false, 00:11:47.567 "compare_and_write": false, 00:11:47.567 "abort": true, 00:11:47.567 "seek_hole": false, 00:11:47.567 "seek_data": false, 00:11:47.567 "copy": true, 00:11:47.567 "nvme_iov_md": false 00:11:47.567 }, 00:11:47.567 "memory_domains": [ 00:11:47.567 { 00:11:47.567 "dma_device_id": "system", 00:11:47.567 "dma_device_type": 1 00:11:47.567 }, 00:11:47.567 { 00:11:47.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.567 "dma_device_type": 2 00:11:47.567 } 00:11:47.567 ], 00:11:47.567 "driver_specific": {} 00:11:47.567 } 00:11:47.567 ] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.567 [2024-11-21 03:20:34.977837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.567 [2024-11-21 03:20:34.977939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.567 [2024-11-21 03:20:34.977985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.567 [2024-11-21 03:20:34.980099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.567 [2024-11-21 03:20:34.980209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.567 03:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.567 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.567 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.567 "name": "Existed_Raid", 00:11:47.567 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:47.567 "strip_size_kb": 0, 00:11:47.567 "state": "configuring", 00:11:47.567 "raid_level": "raid1", 00:11:47.567 "superblock": true, 00:11:47.567 "num_base_bdevs": 4, 00:11:47.567 "num_base_bdevs_discovered": 3, 00:11:47.567 "num_base_bdevs_operational": 4, 00:11:47.567 "base_bdevs_list": [ 00:11:47.567 { 00:11:47.567 "name": "BaseBdev1", 00:11:47.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.567 "is_configured": false, 00:11:47.567 "data_offset": 0, 00:11:47.567 "data_size": 0 00:11:47.567 }, 00:11:47.567 { 00:11:47.567 "name": "BaseBdev2", 00:11:47.567 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:47.567 "is_configured": true, 00:11:47.567 "data_offset": 2048, 00:11:47.567 "data_size": 63488 00:11:47.567 }, 00:11:47.567 { 00:11:47.567 "name": "BaseBdev3", 00:11:47.567 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:47.567 "is_configured": true, 00:11:47.567 "data_offset": 2048, 
00:11:47.567 "data_size": 63488 00:11:47.567 }, 00:11:47.567 { 00:11:47.567 "name": "BaseBdev4", 00:11:47.567 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:47.567 "is_configured": true, 00:11:47.567 "data_offset": 2048, 00:11:47.567 "data_size": 63488 00:11:47.567 } 00:11:47.567 ] 00:11:47.567 }' 00:11:47.567 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.567 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.826 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:47.826 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.826 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.085 [2024-11-21 03:20:35.389945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.085 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.085 "name": "Existed_Raid", 00:11:48.085 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:48.085 "strip_size_kb": 0, 00:11:48.085 "state": "configuring", 00:11:48.085 "raid_level": "raid1", 00:11:48.085 "superblock": true, 00:11:48.085 "num_base_bdevs": 4, 00:11:48.085 "num_base_bdevs_discovered": 2, 00:11:48.085 "num_base_bdevs_operational": 4, 00:11:48.085 "base_bdevs_list": [ 00:11:48.085 { 00:11:48.085 "name": "BaseBdev1", 00:11:48.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.085 "is_configured": false, 00:11:48.085 "data_offset": 0, 00:11:48.085 "data_size": 0 00:11:48.085 }, 00:11:48.085 { 00:11:48.085 "name": null, 00:11:48.085 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:48.085 "is_configured": false, 00:11:48.085 "data_offset": 0, 00:11:48.085 "data_size": 63488 00:11:48.085 }, 00:11:48.085 { 00:11:48.085 "name": "BaseBdev3", 00:11:48.085 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:48.085 "is_configured": true, 00:11:48.085 "data_offset": 2048, 00:11:48.085 
"data_size": 63488 00:11:48.085 }, 00:11:48.085 { 00:11:48.085 "name": "BaseBdev4", 00:11:48.085 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:48.085 "is_configured": true, 00:11:48.085 "data_offset": 2048, 00:11:48.085 "data_size": 63488 00:11:48.085 } 00:11:48.085 ] 00:11:48.085 }' 00:11:48.086 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.086 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.345 [2024-11-21 03:20:35.873326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.345 BaseBdev1 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.345 [ 00:11:48.345 { 00:11:48.345 "name": "BaseBdev1", 00:11:48.345 "aliases": [ 00:11:48.345 "06dc968e-80b4-4ffc-9af7-491b6004cfb0" 00:11:48.345 ], 00:11:48.345 "product_name": "Malloc disk", 00:11:48.345 "block_size": 512, 00:11:48.345 "num_blocks": 65536, 00:11:48.345 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:48.345 "assigned_rate_limits": { 00:11:48.345 "rw_ios_per_sec": 0, 00:11:48.345 "rw_mbytes_per_sec": 0, 00:11:48.345 "r_mbytes_per_sec": 0, 00:11:48.345 "w_mbytes_per_sec": 0 00:11:48.345 }, 00:11:48.345 "claimed": true, 00:11:48.345 "claim_type": "exclusive_write", 00:11:48.345 "zoned": false, 00:11:48.345 "supported_io_types": { 
00:11:48.345 "read": true, 00:11:48.345 "write": true, 00:11:48.345 "unmap": true, 00:11:48.345 "flush": true, 00:11:48.345 "reset": true, 00:11:48.345 "nvme_admin": false, 00:11:48.345 "nvme_io": false, 00:11:48.345 "nvme_io_md": false, 00:11:48.345 "write_zeroes": true, 00:11:48.345 "zcopy": true, 00:11:48.345 "get_zone_info": false, 00:11:48.345 "zone_management": false, 00:11:48.345 "zone_append": false, 00:11:48.345 "compare": false, 00:11:48.345 "compare_and_write": false, 00:11:48.345 "abort": true, 00:11:48.345 "seek_hole": false, 00:11:48.345 "seek_data": false, 00:11:48.345 "copy": true, 00:11:48.345 "nvme_iov_md": false 00:11:48.345 }, 00:11:48.345 "memory_domains": [ 00:11:48.345 { 00:11:48.345 "dma_device_id": "system", 00:11:48.345 "dma_device_type": 1 00:11:48.345 }, 00:11:48.345 { 00:11:48.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.345 "dma_device_type": 2 00:11:48.345 } 00:11:48.345 ], 00:11:48.345 "driver_specific": {} 00:11:48.345 } 00:11:48.345 ] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.345 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.345 03:20:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.605 "name": "Existed_Raid", 00:11:48.605 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:48.605 "strip_size_kb": 0, 00:11:48.605 "state": "configuring", 00:11:48.605 "raid_level": "raid1", 00:11:48.605 "superblock": true, 00:11:48.605 "num_base_bdevs": 4, 00:11:48.605 "num_base_bdevs_discovered": 3, 00:11:48.605 "num_base_bdevs_operational": 4, 00:11:48.605 "base_bdevs_list": [ 00:11:48.605 { 00:11:48.605 "name": "BaseBdev1", 00:11:48.605 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:48.605 "is_configured": true, 00:11:48.605 "data_offset": 2048, 00:11:48.605 "data_size": 63488 00:11:48.605 }, 00:11:48.605 { 00:11:48.605 "name": null, 00:11:48.605 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:48.605 "is_configured": false, 00:11:48.605 "data_offset": 0, 00:11:48.605 "data_size": 63488 00:11:48.605 }, 00:11:48.605 { 00:11:48.605 "name": 
"BaseBdev3", 00:11:48.605 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:48.605 "is_configured": true, 00:11:48.605 "data_offset": 2048, 00:11:48.605 "data_size": 63488 00:11:48.605 }, 00:11:48.605 { 00:11:48.605 "name": "BaseBdev4", 00:11:48.605 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:48.605 "is_configured": true, 00:11:48.605 "data_offset": 2048, 00:11:48.605 "data_size": 63488 00:11:48.605 } 00:11:48.605 ] 00:11:48.605 }' 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.605 03:20:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.874 [2024-11-21 03:20:36.353536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.874 "name": "Existed_Raid", 00:11:48.874 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:48.874 "strip_size_kb": 0, 00:11:48.874 "state": "configuring", 00:11:48.874 
"raid_level": "raid1", 00:11:48.874 "superblock": true, 00:11:48.874 "num_base_bdevs": 4, 00:11:48.874 "num_base_bdevs_discovered": 2, 00:11:48.874 "num_base_bdevs_operational": 4, 00:11:48.874 "base_bdevs_list": [ 00:11:48.874 { 00:11:48.874 "name": "BaseBdev1", 00:11:48.874 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:48.874 "is_configured": true, 00:11:48.874 "data_offset": 2048, 00:11:48.874 "data_size": 63488 00:11:48.874 }, 00:11:48.874 { 00:11:48.874 "name": null, 00:11:48.874 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:48.874 "is_configured": false, 00:11:48.874 "data_offset": 0, 00:11:48.874 "data_size": 63488 00:11:48.874 }, 00:11:48.874 { 00:11:48.874 "name": null, 00:11:48.874 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:48.874 "is_configured": false, 00:11:48.874 "data_offset": 0, 00:11:48.874 "data_size": 63488 00:11:48.874 }, 00:11:48.874 { 00:11:48.874 "name": "BaseBdev4", 00:11:48.874 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:48.874 "is_configured": true, 00:11:48.874 "data_offset": 2048, 00:11:48.874 "data_size": 63488 00:11:48.874 } 00:11:48.874 ] 00:11:48.874 }' 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.874 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.462 [2024-11-21 03:20:36.865748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.462 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.463 03:20:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.463 "name": "Existed_Raid", 00:11:49.463 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:49.463 "strip_size_kb": 0, 00:11:49.463 "state": "configuring", 00:11:49.463 "raid_level": "raid1", 00:11:49.463 "superblock": true, 00:11:49.463 "num_base_bdevs": 4, 00:11:49.463 "num_base_bdevs_discovered": 3, 00:11:49.463 "num_base_bdevs_operational": 4, 00:11:49.463 "base_bdevs_list": [ 00:11:49.463 { 00:11:49.463 "name": "BaseBdev1", 00:11:49.463 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:49.463 "is_configured": true, 00:11:49.463 "data_offset": 2048, 00:11:49.463 "data_size": 63488 00:11:49.463 }, 00:11:49.463 { 00:11:49.463 "name": null, 00:11:49.463 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:49.463 "is_configured": false, 00:11:49.463 "data_offset": 0, 00:11:49.463 "data_size": 63488 00:11:49.463 }, 00:11:49.463 { 00:11:49.463 "name": "BaseBdev3", 00:11:49.463 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:49.463 "is_configured": true, 00:11:49.463 "data_offset": 2048, 00:11:49.463 "data_size": 63488 00:11:49.463 }, 00:11:49.463 { 00:11:49.463 "name": "BaseBdev4", 00:11:49.463 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:49.463 "is_configured": true, 00:11:49.463 "data_offset": 2048, 00:11:49.463 "data_size": 63488 00:11:49.463 } 00:11:49.463 ] 00:11:49.463 }' 00:11:49.463 03:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.463 
03:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:50.032 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 [2024-11-21 03:20:37.365891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.033 03:20:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.033 "name": "Existed_Raid", 00:11:50.033 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:50.033 "strip_size_kb": 0, 00:11:50.033 "state": "configuring", 00:11:50.033 "raid_level": "raid1", 00:11:50.033 "superblock": true, 00:11:50.033 "num_base_bdevs": 4, 00:11:50.033 "num_base_bdevs_discovered": 2, 00:11:50.033 "num_base_bdevs_operational": 4, 00:11:50.033 "base_bdevs_list": [ 00:11:50.033 { 00:11:50.033 "name": null, 00:11:50.033 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:50.033 "is_configured": false, 00:11:50.033 "data_offset": 0, 00:11:50.033 "data_size": 63488 00:11:50.033 }, 00:11:50.033 { 00:11:50.033 "name": null, 00:11:50.033 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:50.033 "is_configured": false, 
00:11:50.033 "data_offset": 0, 00:11:50.033 "data_size": 63488 00:11:50.033 }, 00:11:50.033 { 00:11:50.033 "name": "BaseBdev3", 00:11:50.033 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:50.033 "is_configured": true, 00:11:50.033 "data_offset": 2048, 00:11:50.033 "data_size": 63488 00:11:50.033 }, 00:11:50.033 { 00:11:50.033 "name": "BaseBdev4", 00:11:50.033 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:50.033 "is_configured": true, 00:11:50.033 "data_offset": 2048, 00:11:50.033 "data_size": 63488 00:11:50.033 } 00:11:50.033 ] 00:11:50.033 }' 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.033 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.293 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.293 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.293 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.293 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 [2024-11-21 03:20:37.888764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.553 03:20:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.553 "name": 
"Existed_Raid", 00:11:50.553 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:50.553 "strip_size_kb": 0, 00:11:50.553 "state": "configuring", 00:11:50.553 "raid_level": "raid1", 00:11:50.553 "superblock": true, 00:11:50.553 "num_base_bdevs": 4, 00:11:50.553 "num_base_bdevs_discovered": 3, 00:11:50.553 "num_base_bdevs_operational": 4, 00:11:50.553 "base_bdevs_list": [ 00:11:50.553 { 00:11:50.553 "name": null, 00:11:50.553 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:50.553 "is_configured": false, 00:11:50.553 "data_offset": 0, 00:11:50.553 "data_size": 63488 00:11:50.553 }, 00:11:50.553 { 00:11:50.553 "name": "BaseBdev2", 00:11:50.553 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:50.553 "is_configured": true, 00:11:50.553 "data_offset": 2048, 00:11:50.553 "data_size": 63488 00:11:50.553 }, 00:11:50.553 { 00:11:50.553 "name": "BaseBdev3", 00:11:50.553 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:50.553 "is_configured": true, 00:11:50.553 "data_offset": 2048, 00:11:50.553 "data_size": 63488 00:11:50.553 }, 00:11:50.553 { 00:11:50.553 "name": "BaseBdev4", 00:11:50.553 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:50.553 "is_configured": true, 00:11:50.553 "data_offset": 2048, 00:11:50.553 "data_size": 63488 00:11:50.553 } 00:11:50.553 ] 00:11:50.553 }' 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.553 03:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.813 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 06dc968e-80b4-4ffc-9af7-491b6004cfb0 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.072 [2024-11-21 03:20:38.404237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:51.072 NewBaseBdev 00:11:51.072 [2024-11-21 03:20:38.404540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.072 [2024-11-21 03:20:38.404559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.072 [2024-11-21 03:20:38.404803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:51.072 [2024-11-21 03:20:38.404925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.072 [2024-11-21 03:20:38.404937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:51.072 [2024-11-21 03:20:38.405054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.072 [ 00:11:51.072 { 00:11:51.072 "name": "NewBaseBdev", 00:11:51.072 "aliases": [ 00:11:51.072 "06dc968e-80b4-4ffc-9af7-491b6004cfb0" 00:11:51.072 ], 00:11:51.072 "product_name": "Malloc disk", 00:11:51.072 "block_size": 512, 
00:11:51.072 "num_blocks": 65536, 00:11:51.072 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:51.072 "assigned_rate_limits": { 00:11:51.072 "rw_ios_per_sec": 0, 00:11:51.072 "rw_mbytes_per_sec": 0, 00:11:51.072 "r_mbytes_per_sec": 0, 00:11:51.072 "w_mbytes_per_sec": 0 00:11:51.072 }, 00:11:51.072 "claimed": true, 00:11:51.072 "claim_type": "exclusive_write", 00:11:51.072 "zoned": false, 00:11:51.072 "supported_io_types": { 00:11:51.072 "read": true, 00:11:51.072 "write": true, 00:11:51.072 "unmap": true, 00:11:51.072 "flush": true, 00:11:51.072 "reset": true, 00:11:51.072 "nvme_admin": false, 00:11:51.072 "nvme_io": false, 00:11:51.072 "nvme_io_md": false, 00:11:51.072 "write_zeroes": true, 00:11:51.072 "zcopy": true, 00:11:51.072 "get_zone_info": false, 00:11:51.072 "zone_management": false, 00:11:51.072 "zone_append": false, 00:11:51.072 "compare": false, 00:11:51.072 "compare_and_write": false, 00:11:51.072 "abort": true, 00:11:51.072 "seek_hole": false, 00:11:51.072 "seek_data": false, 00:11:51.072 "copy": true, 00:11:51.072 "nvme_iov_md": false 00:11:51.072 }, 00:11:51.072 "memory_domains": [ 00:11:51.072 { 00:11:51.072 "dma_device_id": "system", 00:11:51.072 "dma_device_type": 1 00:11:51.072 }, 00:11:51.072 { 00:11:51.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.072 "dma_device_type": 2 00:11:51.072 } 00:11:51.072 ], 00:11:51.072 "driver_specific": {} 00:11:51.072 } 00:11:51.072 ] 00:11:51.072 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.073 "name": "Existed_Raid", 00:11:51.073 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:51.073 "strip_size_kb": 0, 00:11:51.073 "state": "online", 00:11:51.073 "raid_level": "raid1", 00:11:51.073 "superblock": true, 00:11:51.073 "num_base_bdevs": 4, 00:11:51.073 "num_base_bdevs_discovered": 4, 00:11:51.073 "num_base_bdevs_operational": 4, 00:11:51.073 "base_bdevs_list": [ 00:11:51.073 { 00:11:51.073 "name": "NewBaseBdev", 00:11:51.073 "uuid": 
"06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:51.073 "is_configured": true, 00:11:51.073 "data_offset": 2048, 00:11:51.073 "data_size": 63488 00:11:51.073 }, 00:11:51.073 { 00:11:51.073 "name": "BaseBdev2", 00:11:51.073 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:51.073 "is_configured": true, 00:11:51.073 "data_offset": 2048, 00:11:51.073 "data_size": 63488 00:11:51.073 }, 00:11:51.073 { 00:11:51.073 "name": "BaseBdev3", 00:11:51.073 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:51.073 "is_configured": true, 00:11:51.073 "data_offset": 2048, 00:11:51.073 "data_size": 63488 00:11:51.073 }, 00:11:51.073 { 00:11:51.073 "name": "BaseBdev4", 00:11:51.073 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 00:11:51.073 "is_configured": true, 00:11:51.073 "data_offset": 2048, 00:11:51.073 "data_size": 63488 00:11:51.073 } 00:11:51.073 ] 00:11:51.073 }' 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.073 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.643 [2024-11-21 03:20:38.920803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.643 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.643 "name": "Existed_Raid", 00:11:51.643 "aliases": [ 00:11:51.643 "4a561f70-96f8-4d68-bf20-f1310f2ced13" 00:11:51.643 ], 00:11:51.643 "product_name": "Raid Volume", 00:11:51.643 "block_size": 512, 00:11:51.643 "num_blocks": 63488, 00:11:51.643 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:51.643 "assigned_rate_limits": { 00:11:51.643 "rw_ios_per_sec": 0, 00:11:51.643 "rw_mbytes_per_sec": 0, 00:11:51.643 "r_mbytes_per_sec": 0, 00:11:51.643 "w_mbytes_per_sec": 0 00:11:51.643 }, 00:11:51.643 "claimed": false, 00:11:51.643 "zoned": false, 00:11:51.643 "supported_io_types": { 00:11:51.643 "read": true, 00:11:51.643 "write": true, 00:11:51.643 "unmap": false, 00:11:51.643 "flush": false, 00:11:51.643 "reset": true, 00:11:51.643 "nvme_admin": false, 00:11:51.643 "nvme_io": false, 00:11:51.643 "nvme_io_md": false, 00:11:51.643 "write_zeroes": true, 00:11:51.643 "zcopy": false, 00:11:51.643 "get_zone_info": false, 00:11:51.643 "zone_management": false, 00:11:51.643 "zone_append": false, 00:11:51.643 "compare": false, 00:11:51.643 "compare_and_write": false, 00:11:51.643 "abort": false, 00:11:51.643 "seek_hole": false, 00:11:51.643 "seek_data": false, 00:11:51.643 "copy": false, 00:11:51.643 "nvme_iov_md": false 00:11:51.643 }, 00:11:51.643 "memory_domains": [ 00:11:51.643 { 00:11:51.643 "dma_device_id": "system", 00:11:51.643 "dma_device_type": 1 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.643 "dma_device_type": 2 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "dma_device_id": "system", 00:11:51.643 "dma_device_type": 1 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.643 "dma_device_type": 2 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "dma_device_id": "system", 00:11:51.643 "dma_device_type": 1 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.643 "dma_device_type": 2 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "dma_device_id": "system", 00:11:51.643 "dma_device_type": 1 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.643 "dma_device_type": 2 00:11:51.643 } 00:11:51.643 ], 00:11:51.643 "driver_specific": { 00:11:51.643 "raid": { 00:11:51.643 "uuid": "4a561f70-96f8-4d68-bf20-f1310f2ced13", 00:11:51.643 "strip_size_kb": 0, 00:11:51.643 "state": "online", 00:11:51.643 "raid_level": "raid1", 00:11:51.643 "superblock": true, 00:11:51.643 "num_base_bdevs": 4, 00:11:51.643 "num_base_bdevs_discovered": 4, 00:11:51.643 "num_base_bdevs_operational": 4, 00:11:51.643 "base_bdevs_list": [ 00:11:51.643 { 00:11:51.643 "name": "NewBaseBdev", 00:11:51.643 "uuid": "06dc968e-80b4-4ffc-9af7-491b6004cfb0", 00:11:51.643 "is_configured": true, 00:11:51.643 "data_offset": 2048, 00:11:51.643 "data_size": 63488 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "name": "BaseBdev2", 00:11:51.643 "uuid": "2f0648e5-d7d2-4918-8bf3-f37205eaeb0a", 00:11:51.643 "is_configured": true, 00:11:51.643 "data_offset": 2048, 00:11:51.643 "data_size": 63488 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "name": "BaseBdev3", 00:11:51.643 "uuid": "a3b2a38d-6d39-4871-af92-5727404f8f67", 00:11:51.643 "is_configured": true, 00:11:51.643 "data_offset": 2048, 00:11:51.643 "data_size": 63488 00:11:51.643 }, 00:11:51.643 { 00:11:51.643 "name": "BaseBdev4", 00:11:51.643 "uuid": "efb9afb9-73ac-4308-9a5e-d728f91f2552", 
00:11:51.643 "is_configured": true, 00:11:51.643 "data_offset": 2048, 00:11:51.643 "data_size": 63488 00:11:51.644 } 00:11:51.644 ] 00:11:51.644 } 00:11:51.644 } 00:11:51.644 }' 00:11:51.644 03:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:51.644 BaseBdev2 00:11:51.644 BaseBdev3 00:11:51.644 BaseBdev4' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:51.644 03:20:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.644 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.903 [2024-11-21 03:20:39.244561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.903 [2024-11-21 03:20:39.244679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.903 [2024-11-21 03:20:39.244772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.903 [2024-11-21 03:20:39.245083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.903 [2024-11-21 03:20:39.245098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86685 00:11:51.903 03:20:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86685 ']' 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 86685 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86685 00:11:51.903 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.904 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.904 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86685' 00:11:51.904 killing process with pid 86685 00:11:51.904 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 86685 00:11:51.904 [2024-11-21 03:20:39.294664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.904 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 86685 00:11:51.904 [2024-11-21 03:20:39.337288] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.163 03:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:52.163 00:11:52.163 real 0m9.747s 00:11:52.163 user 0m16.618s 00:11:52.163 sys 0m2.199s 00:11:52.163 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.163 03:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.163 ************************************ 00:11:52.163 END TEST raid_state_function_test_sb 00:11:52.163 ************************************ 00:11:52.163 03:20:39 
bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:52.163 03:20:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:52.163 03:20:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.163 03:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.163 ************************************ 00:11:52.163 START TEST raid_superblock_test 00:11:52.163 ************************************ 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:52.163 
03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=87343 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 87343 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:52.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 87343 ']' 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.163 03:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.422 [2024-11-21 03:20:39.732170] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:11:52.422 [2024-11-21 03:20:39.732985] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87343 ] 00:11:52.422 [2024-11-21 03:20:39.875214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:52.422 [2024-11-21 03:20:39.911708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.422 [2024-11-21 03:20:39.941938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.422 [2024-11-21 03:20:39.985593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.422 [2024-11-21 03:20:39.985727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.364 malloc1 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.364 [2024-11-21 03:20:40.617970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:53.364 [2024-11-21 03:20:40.618070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.364 [2024-11-21 03:20:40.618102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:53.364 [2024-11-21 03:20:40.618113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.364 [2024-11-21 03:20:40.620387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.364 [2024-11-21 03:20:40.620530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:53.364 pt1 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.364 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 malloc2 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 [2024-11-21 03:20:40.646888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:53.365 [2024-11-21 03:20:40.647081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.365 [2024-11-21 03:20:40.647123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:53.365 [2024-11-21 03:20:40.647158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.365 [2024-11-21 03:20:40.649295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.365 [2024-11-21 03:20:40.649373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:53.365 pt2 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 malloc3 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 [2024-11-21 03:20:40.679915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:53.365 [2024-11-21 03:20:40.680080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.365 [2024-11-21 03:20:40.680125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:53.365 [2024-11-21 03:20:40.680156] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.365 [2024-11-21 03:20:40.682331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.365 [2024-11-21 03:20:40.682414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:53.365 pt3 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 malloc4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:53.365 03:20:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 [2024-11-21 03:20:40.722390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:53.365 [2024-11-21 03:20:40.722553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.365 [2024-11-21 03:20:40.722589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:53.365 [2024-11-21 03:20:40.722601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.365 [2024-11-21 03:20:40.725245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.365 [2024-11-21 03:20:40.725286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:53.365 pt4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 [2024-11-21 03:20:40.734431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:53.365 [2024-11-21 03:20:40.736426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:53.365 [2024-11-21 03:20:40.736560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:53.365 [2024-11-21 03:20:40.736623] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:53.365 [2024-11-21 03:20:40.736819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:53.365 [2024-11-21 03:20:40.736863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.365 [2024-11-21 03:20:40.737190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:53.365 [2024-11-21 03:20:40.737388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:53.365 [2024-11-21 03:20:40.737435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:53.365 [2024-11-21 03:20:40.737608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.365 03:20:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.365 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.365 "name": "raid_bdev1", 00:11:53.365 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:53.365 "strip_size_kb": 0, 00:11:53.365 "state": "online", 00:11:53.365 "raid_level": "raid1", 00:11:53.365 "superblock": true, 00:11:53.365 "num_base_bdevs": 4, 00:11:53.365 "num_base_bdevs_discovered": 4, 00:11:53.365 "num_base_bdevs_operational": 4, 00:11:53.365 "base_bdevs_list": [ 00:11:53.365 { 00:11:53.365 "name": "pt1", 00:11:53.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.365 "is_configured": true, 00:11:53.365 "data_offset": 2048, 00:11:53.366 "data_size": 63488 00:11:53.366 }, 00:11:53.366 { 00:11:53.366 "name": "pt2", 00:11:53.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.366 "is_configured": true, 00:11:53.366 "data_offset": 2048, 00:11:53.366 "data_size": 63488 00:11:53.366 }, 00:11:53.366 { 00:11:53.366 "name": "pt3", 00:11:53.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.366 "is_configured": true, 00:11:53.366 "data_offset": 2048, 00:11:53.366 "data_size": 63488 00:11:53.366 }, 00:11:53.366 { 00:11:53.366 "name": "pt4", 00:11:53.366 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.366 "is_configured": true, 00:11:53.366 "data_offset": 2048, 00:11:53.366 "data_size": 63488 00:11:53.366 } 
00:11:53.366 ] 00:11:53.366 }' 00:11:53.366 03:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.366 03:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.626 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.626 [2024-11-21 03:20:41.174877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.886 "name": "raid_bdev1", 00:11:53.886 "aliases": [ 00:11:53.886 "0c000b9b-5b51-40d2-b1d2-61f857c01d25" 00:11:53.886 ], 00:11:53.886 "product_name": "Raid Volume", 00:11:53.886 "block_size": 512, 00:11:53.886 "num_blocks": 63488, 00:11:53.886 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:53.886 "assigned_rate_limits": { 00:11:53.886 "rw_ios_per_sec": 0, 
00:11:53.886 "rw_mbytes_per_sec": 0, 00:11:53.886 "r_mbytes_per_sec": 0, 00:11:53.886 "w_mbytes_per_sec": 0 00:11:53.886 }, 00:11:53.886 "claimed": false, 00:11:53.886 "zoned": false, 00:11:53.886 "supported_io_types": { 00:11:53.886 "read": true, 00:11:53.886 "write": true, 00:11:53.886 "unmap": false, 00:11:53.886 "flush": false, 00:11:53.886 "reset": true, 00:11:53.886 "nvme_admin": false, 00:11:53.886 "nvme_io": false, 00:11:53.886 "nvme_io_md": false, 00:11:53.886 "write_zeroes": true, 00:11:53.886 "zcopy": false, 00:11:53.886 "get_zone_info": false, 00:11:53.886 "zone_management": false, 00:11:53.886 "zone_append": false, 00:11:53.886 "compare": false, 00:11:53.886 "compare_and_write": false, 00:11:53.886 "abort": false, 00:11:53.886 "seek_hole": false, 00:11:53.886 "seek_data": false, 00:11:53.886 "copy": false, 00:11:53.886 "nvme_iov_md": false 00:11:53.886 }, 00:11:53.886 "memory_domains": [ 00:11:53.886 { 00:11:53.886 "dma_device_id": "system", 00:11:53.886 "dma_device_type": 1 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.886 "dma_device_type": 2 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "system", 00:11:53.886 "dma_device_type": 1 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.886 "dma_device_type": 2 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "system", 00:11:53.886 "dma_device_type": 1 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.886 "dma_device_type": 2 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "system", 00:11:53.886 "dma_device_type": 1 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.886 "dma_device_type": 2 00:11:53.886 } 00:11:53.886 ], 00:11:53.886 "driver_specific": { 00:11:53.886 "raid": { 00:11:53.886 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:53.886 "strip_size_kb": 0, 00:11:53.886 
"state": "online", 00:11:53.886 "raid_level": "raid1", 00:11:53.886 "superblock": true, 00:11:53.886 "num_base_bdevs": 4, 00:11:53.886 "num_base_bdevs_discovered": 4, 00:11:53.886 "num_base_bdevs_operational": 4, 00:11:53.886 "base_bdevs_list": [ 00:11:53.886 { 00:11:53.886 "name": "pt1", 00:11:53.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.886 "is_configured": true, 00:11:53.886 "data_offset": 2048, 00:11:53.886 "data_size": 63488 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "name": "pt2", 00:11:53.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.886 "is_configured": true, 00:11:53.886 "data_offset": 2048, 00:11:53.886 "data_size": 63488 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "name": "pt3", 00:11:53.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.886 "is_configured": true, 00:11:53.886 "data_offset": 2048, 00:11:53.886 "data_size": 63488 00:11:53.886 }, 00:11:53.886 { 00:11:53.886 "name": "pt4", 00:11:53.886 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.886 "is_configured": true, 00:11:53.886 "data_offset": 2048, 00:11:53.886 "data_size": 63488 00:11:53.886 } 00:11:53.886 ] 00:11:53.886 } 00:11:53.886 } 00:11:53.886 }' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:53.886 pt2 00:11:53.886 pt3 00:11:53.886 pt4' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.886 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:53.887 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.887 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.887 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:54.148 [2024-11-21 03:20:41.494899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.148 03:20:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0c000b9b-5b51-40d2-b1d2-61f857c01d25 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0c000b9b-5b51-40d2-b1d2-61f857c01d25 ']' 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 [2024-11-21 03:20:41.546585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.148 [2024-11-21 03:20:41.546693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.148 [2024-11-21 03:20:41.546795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.148 [2024-11-21 03:20:41.546894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.148 [2024-11-21 03:20:41.546909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt4 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.148 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.148 [2024-11-21 03:20:41.706715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:54.148 [2024-11-21 03:20:41.708731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:54.148 [2024-11-21 03:20:41.708786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:54.148 [2024-11-21 03:20:41.708818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:54.148 [2024-11-21 03:20:41.708869] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:54.148 [2024-11-21 03:20:41.708917] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:54.148 [2024-11-21 03:20:41.708942] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:54.148 [2024-11-21 03:20:41.708960] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:54.148 [2024-11-21 03:20:41.708972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.148 [2024-11-21 03:20:41.708983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:11:54.148 request: 00:11:54.409 { 00:11:54.409 "name": "raid_bdev1", 00:11:54.409 "raid_level": "raid1", 00:11:54.409 "base_bdevs": [ 00:11:54.409 "malloc1", 00:11:54.409 "malloc2", 00:11:54.409 "malloc3", 00:11:54.409 
"malloc4" 00:11:54.409 ], 00:11:54.409 "superblock": false, 00:11:54.409 "method": "bdev_raid_create", 00:11:54.409 "req_id": 1 00:11:54.409 } 00:11:54.409 Got JSON-RPC error response 00:11:54.409 response: 00:11:54.409 { 00:11:54.409 "code": -17, 00:11:54.409 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:54.409 } 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.409 [2024-11-21 03:20:41.774670] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:54.409 [2024-11-21 03:20:41.774755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:54.409 [2024-11-21 03:20:41.774773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:54.409 [2024-11-21 03:20:41.774784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:54.409 [2024-11-21 03:20:41.777047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:54.409 [2024-11-21 03:20:41.777165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:54.409 [2024-11-21 03:20:41.777253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:54.409 [2024-11-21 03:20:41.777304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:54.409 pt1
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:54.409 "name": "raid_bdev1",
00:11:54.409 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25",
00:11:54.409 "strip_size_kb": 0,
00:11:54.409 "state": "configuring",
00:11:54.409 "raid_level": "raid1",
00:11:54.409 "superblock": true,
00:11:54.409 "num_base_bdevs": 4,
00:11:54.409 "num_base_bdevs_discovered": 1,
00:11:54.409 "num_base_bdevs_operational": 4,
00:11:54.409 "base_bdevs_list": [
00:11:54.409 {
00:11:54.409 "name": "pt1",
00:11:54.409 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:54.409 "is_configured": true,
00:11:54.409 "data_offset": 2048,
00:11:54.409 "data_size": 63488
00:11:54.409 },
00:11:54.409 {
00:11:54.409 "name": null,
00:11:54.409 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:54.409 "is_configured": false,
00:11:54.409 "data_offset": 2048,
00:11:54.409 "data_size": 63488
00:11:54.409 },
00:11:54.409 {
00:11:54.409 "name": null,
00:11:54.409 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:54.409 "is_configured": false,
00:11:54.409 "data_offset": 2048,
00:11:54.409 "data_size": 63488
00:11:54.409 },
00:11:54.409 {
00:11:54.409 "name": null,
00:11:54.409 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:54.409 "is_configured": false,
00:11:54.409 "data_offset": 2048,
00:11:54.409 "data_size": 63488
00:11:54.409 }
00:11:54.409 ]
00:11:54.409 }'
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:54.409 03:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.670 [2024-11-21 03:20:42.194800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:54.670 [2024-11-21 03:20:42.194965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:54.670 [2024-11-21 03:20:42.195002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:54.670 [2024-11-21 03:20:42.195051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:54.670 [2024-11-21 03:20:42.195482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:54.670 [2024-11-21 03:20:42.195547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:54.670 [2024-11-21 03:20:42.195650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:54.670 [2024-11-21 03:20:42.195709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:54.670 pt2
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
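The `verify_raid_bdev_state` calls traced above boil down to pulling the named raid bdev out of `bdev_raid_get_bdevs all` and comparing a handful of fields against the expected values. A minimal stand-alone sketch of that check (the JSON is abbreviated from the dump above; `check_field` is a hypothetical grep-based stand-in for the jq filtering the script actually uses, and no SPDK RPC is issued here):

```shell
# Abbreviated bdev_raid_get_bdevs output, taken from the dump above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs_operational": 4
}'

# check_field <key> <expected> -- hypothetical grep-based stand-in for jq.
check_field() {
  echo "$raid_bdev_info" | grep -q "\"$1\": $2"
}

# Mirrors the comparisons verify_raid_bdev_state performs.
check_field state '"configuring"'
check_field raid_level '"raid1"'
check_field strip_size_kb 0
check_field num_base_bdevs_operational 4
```

In the real script the same comparison is driven by jq (`.[] | select(.name == "raid_bdev1")`) so that only the bdev under test is inspected.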
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.670 [2024-11-21 03:20:42.206786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.670 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:54.930 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.930 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:54.930 "name": "raid_bdev1",
00:11:54.930 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25",
00:11:54.930 "strip_size_kb": 0,
00:11:54.930 "state": "configuring",
00:11:54.930 "raid_level": "raid1",
00:11:54.930 "superblock": true,
00:11:54.930 "num_base_bdevs": 4,
00:11:54.930 "num_base_bdevs_discovered": 1,
00:11:54.930 "num_base_bdevs_operational": 4,
00:11:54.930 "base_bdevs_list": [
00:11:54.930 {
00:11:54.930 "name": "pt1",
00:11:54.930 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:54.930 "is_configured": true,
00:11:54.930 "data_offset": 2048,
00:11:54.930 "data_size": 63488
00:11:54.930 },
00:11:54.930 {
00:11:54.930 "name": null,
00:11:54.930 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:54.930 "is_configured": false,
00:11:54.930 "data_offset": 0,
00:11:54.930 "data_size": 63488
00:11:54.930 },
00:11:54.930 {
00:11:54.930 "name": null,
00:11:54.930 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:54.930 "is_configured": false,
00:11:54.930 "data_offset": 2048,
00:11:54.930 "data_size": 63488
00:11:54.930 },
00:11:54.930 {
00:11:54.930 "name": null,
00:11:54.930 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:54.930 "is_configured": false,
00:11:54.930 "data_offset": 2048,
00:11:54.930 "data_size": 63488
00:11:54.930 }
00:11:54.930 ]
00:11:54.930 }'
00:11:54.930 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:54.930 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.191 [2024-11-21 03:20:42.702927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:55.191 [2024-11-21 03:20:42.703048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:55.191 [2024-11-21 03:20:42.703071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:55.191 [2024-11-21 03:20:42.703081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:55.191 [2024-11-21 03:20:42.703504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:55.191 [2024-11-21 03:20:42.703532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:55.191 [2024-11-21 03:20:42.703608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:55.191 [2024-11-21 03:20:42.703628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:55.191 pt2
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.191 [2024-11-21 03:20:42.710904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:55.191 [2024-11-21 03:20:42.710973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:55.191 [2024-11-21 03:20:42.711009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:55.191 [2024-11-21 03:20:42.711018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:55.191 [2024-11-21 03:20:42.711401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:55.191 [2024-11-21 03:20:42.711418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:55.191 [2024-11-21 03:20:42.711482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:55.191 [2024-11-21 03:20:42.711499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:55.191 pt3
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.191 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.191 [2024-11-21 03:20:42.718897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:55.191 [2024-11-21 03:20:42.718954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:55.191 [2024-11-21 03:20:42.718977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
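The `bdev/bdev_raid.sh@478-479` loop traced above re-creates one passthru bdev per remaining base bdev, with a fixed per-index UUID. A hypothetical stand-alone sketch of that loop; `rpc.py` is not actually invoked here, the command line is only printed:

```shell
# For each remaining base bdev index, build the fixed UUID and show the
# bdev_passthru_create invocation the test script issues via rpc_cmd.
for i in 2 3 4; do
  uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
  echo "rpc.py bdev_passthru_create -b malloc$i -p pt$i -u $uuid"
done
```

Each created `pt$i` is then claimed by the raid bdev once its superblock is examined, as the *NOTICE*/*DEBUG* lines above show.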
00:11:55.191 [2024-11-21 03:20:42.718986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:55.192 [2024-11-21 03:20:42.719379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:55.192 [2024-11-21 03:20:42.719397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:55.192 [2024-11-21 03:20:42.719456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:55.192 [2024-11-21 03:20:42.719473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:55.192 [2024-11-21 03:20:42.719587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:11:55.192 [2024-11-21 03:20:42.719604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:55.192 [2024-11-21 03:20:42.719845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:11:55.192 [2024-11-21 03:20:42.719971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:11:55.192 [2024-11-21 03:20:42.719994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:11:55.192 [2024-11-21 03:20:42.720104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:55.192 pt4
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.192 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.452 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:55.452 "name": "raid_bdev1",
00:11:55.452 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25",
00:11:55.452 "strip_size_kb": 0,
00:11:55.452 "state": "online",
00:11:55.452 "raid_level": "raid1",
00:11:55.452 "superblock": true,
00:11:55.452 "num_base_bdevs": 4,
00:11:55.452 "num_base_bdevs_discovered": 4,
00:11:55.452 "num_base_bdevs_operational": 4,
00:11:55.452 "base_bdevs_list": [
00:11:55.452 {
00:11:55.452 "name": "pt1",
00:11:55.452 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:55.452 "is_configured": true,
00:11:55.452 "data_offset": 2048,
00:11:55.452 "data_size": 63488
00:11:55.452 },
00:11:55.452 {
00:11:55.452 "name": "pt2",
00:11:55.452 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:55.452 "is_configured": true,
00:11:55.452 "data_offset": 2048,
00:11:55.452 "data_size": 63488
00:11:55.452 },
00:11:55.452 {
00:11:55.452 "name": "pt3",
00:11:55.452 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:55.452 "is_configured": true,
00:11:55.452 "data_offset": 2048,
00:11:55.452 "data_size": 63488
00:11:55.452 },
00:11:55.452 {
00:11:55.452 "name": "pt4",
00:11:55.453 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:55.453 "is_configured": true,
00:11:55.453 "data_offset": 2048,
00:11:55.453 "data_size": 63488
00:11:55.453 }
00:11:55.453 ]
00:11:55.453 }'
00:11:55.453 03:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:55.453 03:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.713 [2024-11-21 03:20:43.167450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.713 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:55.713 "name": "raid_bdev1",
00:11:55.713 "aliases": [
00:11:55.713 "0c000b9b-5b51-40d2-b1d2-61f857c01d25"
00:11:55.713 ],
00:11:55.713 "product_name": "Raid Volume",
00:11:55.713 "block_size": 512,
00:11:55.713 "num_blocks": 63488,
00:11:55.713 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25",
00:11:55.713 "assigned_rate_limits": {
00:11:55.713 "rw_ios_per_sec": 0,
00:11:55.713 "rw_mbytes_per_sec": 0,
00:11:55.713 "r_mbytes_per_sec": 0,
00:11:55.713 "w_mbytes_per_sec": 0
00:11:55.713 },
00:11:55.713 "claimed": false,
00:11:55.713 "zoned": false,
00:11:55.713 "supported_io_types": {
00:11:55.713 "read": true,
00:11:55.713 "write": true,
00:11:55.713 "unmap": false,
00:11:55.713 "flush": false,
00:11:55.713 "reset": true,
00:11:55.713 "nvme_admin": false,
00:11:55.713 "nvme_io": false,
00:11:55.713 "nvme_io_md": false,
00:11:55.713 "write_zeroes": true,
00:11:55.713 "zcopy": false,
00:11:55.713 "get_zone_info": false,
00:11:55.713 "zone_management": false,
00:11:55.713 "zone_append": false,
00:11:55.713 "compare": false,
00:11:55.713 "compare_and_write": false,
00:11:55.713 "abort": false,
00:11:55.713 "seek_hole": false,
00:11:55.713 "seek_data": false,
00:11:55.713 "copy": false,
00:11:55.713 "nvme_iov_md": false
00:11:55.713 },
00:11:55.713 "memory_domains": [
00:11:55.713 {
00:11:55.713 "dma_device_id": "system",
00:11:55.713 "dma_device_type": 1
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:55.713 "dma_device_type": 2
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "system",
00:11:55.713 "dma_device_type": 1
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:55.713 "dma_device_type": 2
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "system",
00:11:55.713 "dma_device_type": 1
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:55.713 "dma_device_type": 2
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "system",
00:11:55.713 "dma_device_type": 1
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:55.713 "dma_device_type": 2
00:11:55.713 }
00:11:55.713 ],
00:11:55.713 "driver_specific": {
00:11:55.713 "raid": {
00:11:55.713 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25",
00:11:55.713 "strip_size_kb": 0,
00:11:55.713 "state": "online",
00:11:55.713 "raid_level": "raid1",
00:11:55.713 "superblock": true,
00:11:55.713 "num_base_bdevs": 4,
00:11:55.713 "num_base_bdevs_discovered": 4,
00:11:55.713 "num_base_bdevs_operational": 4,
00:11:55.713 "base_bdevs_list": [
00:11:55.713 {
00:11:55.713 "name": "pt1",
00:11:55.713 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:55.713 "is_configured": true,
00:11:55.713 "data_offset": 2048,
00:11:55.713 "data_size": 63488
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "name": "pt2",
00:11:55.713 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:55.713 "is_configured": true,
00:11:55.713 "data_offset": 2048,
00:11:55.713 "data_size": 63488
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "name": "pt3",
00:11:55.713 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:55.713 "is_configured": true,
00:11:55.713 "data_offset": 2048,
00:11:55.713 "data_size": 63488
00:11:55.713 },
00:11:55.713 {
00:11:55.713 "name": "pt4",
00:11:55.713 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:55.713 "is_configured": true,
00:11:55.714 "data_offset": 2048,
00:11:55.714 "data_size": 63488
00:11:55.714 }
00:11:55.714 ]
00:11:55.714 }
00:11:55.714 }
00:11:55.714 }'
00:11:55.714 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:55.714 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:55.714 pt2
00:11:55.714 pt3
00:11:55.714 pt4'
00:11:55.714 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:55.975 [2024-11-21 03:20:43.479503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0c000b9b-5b51-40d2-b1d2-61f857c01d25 '!=' 0c000b9b-5b51-40d2-b1d2-61f857c01d25 ']'
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.975 [2024-11-21 03:20:43.527247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:55.975 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.236 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.236 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:56.236 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.236 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.236 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.236 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.236 "name": "raid_bdev1",
00:11:56.236 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25",
00:11:56.236 "strip_size_kb": 0,
00:11:56.236 "state": "online",
00:11:56.236 "raid_level": "raid1",
00:11:56.236 "superblock": true,
00:11:56.236 "num_base_bdevs": 4,
00:11:56.236 "num_base_bdevs_discovered": 3,
00:11:56.236 "num_base_bdevs_operational": 3,
00:11:56.236 "base_bdevs_list": [
00:11:56.236 {
00:11:56.236 "name": null,
00:11:56.236 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.236 "is_configured": false,
00:11:56.236 "data_offset": 0,
00:11:56.236 "data_size": 63488
00:11:56.236 },
00:11:56.236 {
00:11:56.236 "name": "pt2",
00:11:56.236 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:56.236 "is_configured": true,
00:11:56.236 "data_offset": 2048,
00:11:56.236 "data_size": 63488
00:11:56.236 },
00:11:56.236 {
00:11:56.236 "name": "pt3",
00:11:56.236 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:56.236 "is_configured": true,
00:11:56.236 "data_offset": 2048,
00:11:56.236 "data_size": 63488
00:11:56.236 },
00:11:56.237 {
00:11:56.237 "name": "pt4",
00:11:56.237 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:56.237 "is_configured": true,
00:11:56.237 "data_offset": 2048,
00:11:56.237 "data_size": 63488
00:11:56.237 }
00:11:56.237 ]
00:11:56.237 }'
00:11:56.237 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:56.237 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.497 [2024-11-21 03:20:43.983314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:56.497 [2024-11-21 03:20:43.983449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:56.497 [2024-11-21 03:20:43.983552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:56.497 [2024-11-21 03:20:43.983646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:56.497 [2024-11-21 03:20:43.983695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.497 03:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:11:56.497 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.757 [2024-11-21 03:20:44.079288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:56.757 [2024-11-21 03:20:44.079359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.757 [2024-11-21 03:20:44.079381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:11:56.757 [2024-11-21 03:20:44.079390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.757 [2024-11-21 03:20:44.081679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.757 [2024-11-21 03:20:44.081723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:56.757 [2024-11-21 03:20:44.081803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:56.757 [2024-11-21 03:20:44.081837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:56.757 pt2
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.757 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[]
| select(.name == "raid_bdev1")' 00:11:56.758 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.758 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.758 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.758 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.758 "name": "raid_bdev1", 00:11:56.758 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:56.758 "strip_size_kb": 0, 00:11:56.758 "state": "configuring", 00:11:56.758 "raid_level": "raid1", 00:11:56.758 "superblock": true, 00:11:56.758 "num_base_bdevs": 4, 00:11:56.758 "num_base_bdevs_discovered": 1, 00:11:56.758 "num_base_bdevs_operational": 3, 00:11:56.758 "base_bdevs_list": [ 00:11:56.758 { 00:11:56.758 "name": null, 00:11:56.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.758 "is_configured": false, 00:11:56.758 "data_offset": 2048, 00:11:56.758 "data_size": 63488 00:11:56.758 }, 00:11:56.758 { 00:11:56.758 "name": "pt2", 00:11:56.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.758 "is_configured": true, 00:11:56.758 "data_offset": 2048, 00:11:56.758 "data_size": 63488 00:11:56.758 }, 00:11:56.758 { 00:11:56.758 "name": null, 00:11:56.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.758 "is_configured": false, 00:11:56.758 "data_offset": 2048, 00:11:56.758 "data_size": 63488 00:11:56.758 }, 00:11:56.758 { 00:11:56.758 "name": null, 00:11:56.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.758 "is_configured": false, 00:11:56.758 "data_offset": 2048, 00:11:56.758 "data_size": 63488 00:11:56.758 } 00:11:56.758 ] 00:11:56.758 }' 00:11:56.758 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.758 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.018 03:20:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:57.018 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:57.018 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.019 [2024-11-21 03:20:44.527494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:57.019 [2024-11-21 03:20:44.527667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.019 [2024-11-21 03:20:44.527713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:57.019 [2024-11-21 03:20:44.527746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.019 [2024-11-21 03:20:44.528243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.019 [2024-11-21 03:20:44.528305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:57.019 [2024-11-21 03:20:44.528418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:57.019 [2024-11-21 03:20:44.528465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:57.019 pt3 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.019 "name": "raid_bdev1", 00:11:57.019 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:57.019 "strip_size_kb": 0, 00:11:57.019 "state": "configuring", 00:11:57.019 "raid_level": "raid1", 00:11:57.019 "superblock": true, 00:11:57.019 "num_base_bdevs": 4, 00:11:57.019 "num_base_bdevs_discovered": 2, 00:11:57.019 "num_base_bdevs_operational": 3, 00:11:57.019 "base_bdevs_list": [ 00:11:57.019 { 00:11:57.019 "name": null, 00:11:57.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.019 "is_configured": false, 00:11:57.019 "data_offset": 2048, 00:11:57.019 
"data_size": 63488 00:11:57.019 }, 00:11:57.019 { 00:11:57.019 "name": "pt2", 00:11:57.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.019 "is_configured": true, 00:11:57.019 "data_offset": 2048, 00:11:57.019 "data_size": 63488 00:11:57.019 }, 00:11:57.019 { 00:11:57.019 "name": "pt3", 00:11:57.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.019 "is_configured": true, 00:11:57.019 "data_offset": 2048, 00:11:57.019 "data_size": 63488 00:11:57.019 }, 00:11:57.019 { 00:11:57.019 "name": null, 00:11:57.019 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.019 "is_configured": false, 00:11:57.019 "data_offset": 2048, 00:11:57.019 "data_size": 63488 00:11:57.019 } 00:11:57.019 ] 00:11:57.019 }' 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.019 03:20:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.590 [2024-11-21 03:20:45.019626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:57.590 [2024-11-21 03:20:45.019715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.590 [2024-11-21 03:20:45.019750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:57.590 [2024-11-21 
03:20:45.019760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.590 [2024-11-21 03:20:45.020197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.590 [2024-11-21 03:20:45.020230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:57.590 [2024-11-21 03:20:45.020311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:57.590 [2024-11-21 03:20:45.020339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:57.590 [2024-11-21 03:20:45.020450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.590 [2024-11-21 03:20:45.020467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.590 [2024-11-21 03:20:45.020709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:57.590 [2024-11-21 03:20:45.020826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.590 [2024-11-21 03:20:45.020838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:57.590 [2024-11-21 03:20:45.020939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.590 pt4 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.590 "name": "raid_bdev1", 00:11:57.590 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:57.590 "strip_size_kb": 0, 00:11:57.590 "state": "online", 00:11:57.590 "raid_level": "raid1", 00:11:57.590 "superblock": true, 00:11:57.590 "num_base_bdevs": 4, 00:11:57.590 "num_base_bdevs_discovered": 3, 00:11:57.590 "num_base_bdevs_operational": 3, 00:11:57.590 "base_bdevs_list": [ 00:11:57.590 { 00:11:57.590 "name": null, 00:11:57.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.590 "is_configured": false, 00:11:57.590 "data_offset": 2048, 00:11:57.590 "data_size": 63488 00:11:57.590 }, 00:11:57.590 { 00:11:57.590 "name": "pt2", 00:11:57.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.590 "is_configured": true, 00:11:57.590 
"data_offset": 2048, 00:11:57.590 "data_size": 63488 00:11:57.590 }, 00:11:57.590 { 00:11:57.590 "name": "pt3", 00:11:57.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.590 "is_configured": true, 00:11:57.590 "data_offset": 2048, 00:11:57.590 "data_size": 63488 00:11:57.590 }, 00:11:57.590 { 00:11:57.590 "name": "pt4", 00:11:57.590 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.590 "is_configured": true, 00:11:57.590 "data_offset": 2048, 00:11:57.590 "data_size": 63488 00:11:57.590 } 00:11:57.590 ] 00:11:57.590 }' 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.590 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 [2024-11-21 03:20:45.451732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.162 [2024-11-21 03:20:45.451862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.162 [2024-11-21 03:20:45.451974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.162 [2024-11-21 03:20:45.452083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.162 [2024-11-21 03:20:45.452136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.162 03:20:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.162 [2024-11-21 03:20:45.519741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:58.162 [2024-11-21 03:20:45.519893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.162 [2024-11-21 03:20:45.519931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:58.162 [2024-11-21 03:20:45.519972] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.162 [2024-11-21 03:20:45.522267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.162 [2024-11-21 03:20:45.522313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:58.162 [2024-11-21 03:20:45.522392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:58.162 [2024-11-21 03:20:45.522427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:58.162 [2024-11-21 03:20:45.522537] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:58.162 [2024-11-21 03:20:45.522551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.162 [2024-11-21 03:20:45.522566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:11:58.162 [2024-11-21 03:20:45.522605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.162 [2024-11-21 03:20:45.522693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:58.162 pt1 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.162 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.163 "name": "raid_bdev1", 00:11:58.163 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:58.163 "strip_size_kb": 0, 00:11:58.163 "state": "configuring", 00:11:58.163 "raid_level": "raid1", 00:11:58.163 "superblock": true, 00:11:58.163 "num_base_bdevs": 4, 00:11:58.163 "num_base_bdevs_discovered": 2, 00:11:58.163 "num_base_bdevs_operational": 3, 00:11:58.163 "base_bdevs_list": [ 00:11:58.163 { 00:11:58.163 "name": null, 00:11:58.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.163 "is_configured": false, 00:11:58.163 "data_offset": 2048, 00:11:58.163 "data_size": 63488 00:11:58.163 }, 00:11:58.163 { 00:11:58.163 "name": "pt2", 00:11:58.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.163 "is_configured": true, 00:11:58.163 
"data_offset": 2048, 00:11:58.163 "data_size": 63488 00:11:58.163 }, 00:11:58.163 { 00:11:58.163 "name": "pt3", 00:11:58.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.163 "is_configured": true, 00:11:58.163 "data_offset": 2048, 00:11:58.163 "data_size": 63488 00:11:58.163 }, 00:11:58.163 { 00:11:58.163 "name": null, 00:11:58.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:58.163 "is_configured": false, 00:11:58.163 "data_offset": 2048, 00:11:58.163 "data_size": 63488 00:11:58.163 } 00:11:58.163 ] 00:11:58.163 }' 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.163 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.423 [2024-11-21 03:20:45.971853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:58.423 [2024-11-21 03:20:45.972034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:58.423 [2024-11-21 03:20:45.972078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:58.423 [2024-11-21 03:20:45.972110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.423 [2024-11-21 03:20:45.972554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.423 [2024-11-21 03:20:45.972614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:58.423 [2024-11-21 03:20:45.972718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:58.423 [2024-11-21 03:20:45.972768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:58.423 [2024-11-21 03:20:45.972892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:58.423 [2024-11-21 03:20:45.972929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.423 [2024-11-21 03:20:45.973209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:58.423 [2024-11-21 03:20:45.973367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:58.423 [2024-11-21 03:20:45.973410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:58.423 [2024-11-21 03:20:45.973554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.423 pt4 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.423 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.684 03:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.684 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.684 "name": "raid_bdev1", 00:11:58.684 "uuid": "0c000b9b-5b51-40d2-b1d2-61f857c01d25", 00:11:58.684 "strip_size_kb": 0, 00:11:58.684 "state": "online", 00:11:58.684 "raid_level": "raid1", 00:11:58.684 "superblock": true, 00:11:58.684 "num_base_bdevs": 4, 00:11:58.684 "num_base_bdevs_discovered": 3, 00:11:58.684 "num_base_bdevs_operational": 3, 00:11:58.684 "base_bdevs_list": [ 00:11:58.684 { 00:11:58.684 "name": null, 00:11:58.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.684 "is_configured": false, 00:11:58.684 "data_offset": 2048, 00:11:58.684 
"data_size": 63488 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "name": "pt2", 00:11:58.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.684 "is_configured": true, 00:11:58.684 "data_offset": 2048, 00:11:58.684 "data_size": 63488 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "name": "pt3", 00:11:58.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.684 "is_configured": true, 00:11:58.684 "data_offset": 2048, 00:11:58.684 "data_size": 63488 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "name": "pt4", 00:11:58.684 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:58.684 "is_configured": true, 00:11:58.684 "data_offset": 2048, 00:11:58.684 "data_size": 63488 00:11:58.684 } 00:11:58.684 ] 00:11:58.684 }' 00:11:58.684 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.684 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:58.944 03:20:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.204 [2024-11-21 03:20:46.508412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0c000b9b-5b51-40d2-b1d2-61f857c01d25 '!=' 0c000b9b-5b51-40d2-b1d2-61f857c01d25 ']' 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 87343 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 87343 ']' 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 87343 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87343 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87343' 00:11:59.204 killing process with pid 87343 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 87343 00:11:59.204 [2024-11-21 03:20:46.585701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.204 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 87343 00:11:59.204 [2024-11-21 03:20:46.585904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.204 [2024-11-21 03:20:46.585983] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.204 [2024-11-21 03:20:46.585996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:59.204 [2024-11-21 03:20:46.631171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.464 03:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:59.464 ************************************ 00:11:59.464 END TEST raid_superblock_test 00:11:59.464 ************************************ 00:11:59.464 00:11:59.464 real 0m7.216s 00:11:59.464 user 0m12.124s 00:11:59.464 sys 0m1.606s 00:11:59.464 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.464 03:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.465 03:20:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:59.465 03:20:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:59.465 03:20:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.465 03:20:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.465 ************************************ 00:11:59.465 START TEST raid_read_error_test 00:11:59.465 ************************************ 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # 
local fail_per_s 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7uy6grESTt 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87820 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87820 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 87820 ']' 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.465 03:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.725 [2024-11-21 03:20:47.041256] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:11:59.726 [2024-11-21 03:20:47.041396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87820 ] 00:11:59.726 [2024-11-21 03:20:47.183647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:59.726 [2024-11-21 03:20:47.223540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.726 [2024-11-21 03:20:47.253614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.986 [2024-11-21 03:20:47.296645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.986 [2024-11-21 03:20:47.296800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.557 BaseBdev1_malloc 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.557 03:20:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.557 true 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.557 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.557 [2024-11-21 03:20:47.916799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:00.557 [2024-11-21 03:20:47.916989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.557 [2024-11-21 03:20:47.917043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:00.557 [2024-11-21 03:20:47.917086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.557 [2024-11-21 03:20:47.919448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.557 [2024-11-21 03:20:47.919559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.558 BaseBdev1 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 BaseBdev2_malloc 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 true 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-11-21 03:20:47.957729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.558 [2024-11-21 03:20:47.957909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.558 [2024-11-21 03:20:47.957933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:00.558 [2024-11-21 03:20:47.957944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.558 [2024-11-21 03:20:47.960148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.558 [2024-11-21 03:20:47.960252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.558 BaseBdev2 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 BaseBdev3_malloc 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 true 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-11-21 03:20:47.998570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.558 [2024-11-21 03:20:47.998731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.558 [2024-11-21 03:20:47.998756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:00.558 [2024-11-21 03:20:47.998767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.558 [2024-11-21 03:20:48.001101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.558 [2024-11-21 03:20:48.001147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:00.558 BaseBdev3 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 BaseBdev4_malloc 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 true 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-11-21 03:20:48.049308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:00.558 [2024-11-21 03:20:48.049475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.558 [2024-11-21 03:20:48.049502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:00.558 [2024-11-21 03:20:48.049513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.558 [2024-11-21 03:20:48.051854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.558 
[2024-11-21 03:20:48.051961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:00.558 BaseBdev4 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-11-21 03:20:48.061357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.558 [2024-11-21 03:20:48.063366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.558 [2024-11-21 03:20:48.063449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.558 [2024-11-21 03:20:48.063504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.558 [2024-11-21 03:20:48.063715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:00.558 [2024-11-21 03:20:48.063731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.558 [2024-11-21 03:20:48.064007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:12:00.558 [2024-11-21 03:20:48.064165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.558 [2024-11-21 03:20:48.064174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:00.558 [2024-11-21 03:20:48.064332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.558 "name": "raid_bdev1", 00:12:00.558 "uuid": "4bd8ab6d-a889-49bd-97ff-b673277bf71d", 00:12:00.558 "strip_size_kb": 0, 00:12:00.558 "state": "online", 00:12:00.558 "raid_level": "raid1", 00:12:00.558 "superblock": true, 
00:12:00.558 "num_base_bdevs": 4, 00:12:00.558 "num_base_bdevs_discovered": 4, 00:12:00.558 "num_base_bdevs_operational": 4, 00:12:00.558 "base_bdevs_list": [ 00:12:00.558 { 00:12:00.558 "name": "BaseBdev1", 00:12:00.558 "uuid": "80d3a956-db6b-542a-8640-51ea82470479", 00:12:00.558 "is_configured": true, 00:12:00.558 "data_offset": 2048, 00:12:00.558 "data_size": 63488 00:12:00.558 }, 00:12:00.558 { 00:12:00.558 "name": "BaseBdev2", 00:12:00.558 "uuid": "2d568689-3b53-5321-b4b6-af139b70c323", 00:12:00.558 "is_configured": true, 00:12:00.558 "data_offset": 2048, 00:12:00.558 "data_size": 63488 00:12:00.558 }, 00:12:00.558 { 00:12:00.558 "name": "BaseBdev3", 00:12:00.558 "uuid": "10e41833-f7e8-567c-afab-c4c930875788", 00:12:00.558 "is_configured": true, 00:12:00.558 "data_offset": 2048, 00:12:00.558 "data_size": 63488 00:12:00.558 }, 00:12:00.558 { 00:12:00.558 "name": "BaseBdev4", 00:12:00.558 "uuid": "0377c125-540d-553e-8352-51dc34abe58c", 00:12:00.558 "is_configured": true, 00:12:00.558 "data_offset": 2048, 00:12:00.558 "data_size": 63488 00:12:00.558 } 00:12:00.558 ] 00:12:00.558 }' 00:12:00.819 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.819 03:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.079 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.079 03:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.079 [2024-11-21 03:20:48.585839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.038 "name": "raid_bdev1", 00:12:02.038 "uuid": "4bd8ab6d-a889-49bd-97ff-b673277bf71d", 00:12:02.038 "strip_size_kb": 0, 00:12:02.038 "state": "online", 00:12:02.038 "raid_level": "raid1", 00:12:02.038 "superblock": true, 00:12:02.038 "num_base_bdevs": 4, 00:12:02.038 "num_base_bdevs_discovered": 4, 00:12:02.038 "num_base_bdevs_operational": 4, 00:12:02.038 "base_bdevs_list": [ 00:12:02.038 { 00:12:02.038 "name": "BaseBdev1", 00:12:02.038 "uuid": "80d3a956-db6b-542a-8640-51ea82470479", 00:12:02.038 "is_configured": true, 00:12:02.038 "data_offset": 2048, 00:12:02.038 "data_size": 63488 00:12:02.038 }, 00:12:02.038 { 00:12:02.038 "name": "BaseBdev2", 00:12:02.038 "uuid": "2d568689-3b53-5321-b4b6-af139b70c323", 00:12:02.038 "is_configured": true, 00:12:02.038 "data_offset": 2048, 00:12:02.038 "data_size": 63488 00:12:02.038 }, 00:12:02.038 { 00:12:02.038 "name": "BaseBdev3", 00:12:02.038 "uuid": "10e41833-f7e8-567c-afab-c4c930875788", 00:12:02.038 "is_configured": true, 00:12:02.038 "data_offset": 2048, 00:12:02.038 "data_size": 63488 00:12:02.038 }, 00:12:02.038 { 00:12:02.038 "name": "BaseBdev4", 00:12:02.038 "uuid": "0377c125-540d-553e-8352-51dc34abe58c", 00:12:02.038 "is_configured": true, 00:12:02.038 "data_offset": 2048, 00:12:02.038 "data_size": 63488 00:12:02.038 } 00:12:02.038 ] 00:12:02.038 }' 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.038 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.608 [2024-11-21 03:20:49.957316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.608 [2024-11-21 03:20:49.957465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.608 [2024-11-21 03:20:49.960160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.608 [2024-11-21 03:20:49.960260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.608 [2024-11-21 03:20:49.960399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.608 [2024-11-21 03:20:49.960464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:02.608 { 00:12:02.608 "results": [ 00:12:02.608 { 00:12:02.608 "job": "raid_bdev1", 00:12:02.608 "core_mask": "0x1", 00:12:02.608 "workload": "randrw", 00:12:02.608 "percentage": 50, 00:12:02.608 "status": "finished", 00:12:02.608 "queue_depth": 1, 00:12:02.608 "io_size": 131072, 00:12:02.608 "runtime": 1.36943, 00:12:02.608 "iops": 10748.2675273654, 00:12:02.608 "mibps": 1343.533440920675, 00:12:02.608 "io_failed": 0, 00:12:02.608 "io_timeout": 0, 00:12:02.608 "avg_latency_us": 90.34142357359256, 00:12:02.608 "min_latency_us": 23.652052645341236, 00:12:02.608 "max_latency_us": 1513.731369301839 00:12:02.608 } 00:12:02.608 ], 00:12:02.608 "core_count": 1 00:12:02.608 } 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87820 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 87820 ']' 00:12:02.608 
03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 87820 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87820 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87820' 00:12:02.608 killing process with pid 87820 00:12:02.608 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 87820 00:12:02.609 [2024-11-21 03:20:49.999443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.609 03:20:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 87820 00:12:02.609 [2024-11-21 03:20:50.036297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7uy6grESTt 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:02.869 03:20:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:02.869 00:12:02.869 real 0m3.323s 00:12:02.869 user 0m4.170s 00:12:02.869 sys 0m0.583s 00:12:02.869 ************************************ 00:12:02.869 END TEST raid_read_error_test 00:12:02.869 ************************************ 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.869 03:20:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.869 03:20:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:02.869 03:20:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.869 03:20:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.869 03:20:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.869 ************************************ 00:12:02.869 START TEST raid_write_error_test 00:12:02.869 ************************************ 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs 
)) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 
00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p4xE4amvkj 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87949 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87949 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 87949 ']' 00:12:02.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.869 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.870 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.870 03:20:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.870 [2024-11-21 03:20:50.425369] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:12:02.870 [2024-11-21 03:20:50.425506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87949 ] 00:12:03.130 [2024-11-21 03:20:50.561405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:03.130 [2024-11-21 03:20:50.592231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.130 [2024-11-21 03:20:50.622191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.130 [2024-11-21 03:20:50.665881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.130 [2024-11-21 03:20:50.665922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.071 BaseBdev1_malloc 00:12:04.071 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 true 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 [2024-11-21 03:20:51.345833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:04.072 [2024-11-21 03:20:51.345908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.072 [2024-11-21 03:20:51.345928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:04.072 [2024-11-21 03:20:51.345941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.072 [2024-11-21 03:20:51.348269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.072 [2024-11-21 03:20:51.348337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.072 BaseBdev1 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 BaseBdev2_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 true 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 [2024-11-21 03:20:51.374657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:04.072 [2024-11-21 03:20:51.374728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.072 [2024-11-21 03:20:51.374746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:04.072 [2024-11-21 03:20:51.374756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.072 [2024-11-21 03:20:51.376950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.072 [2024-11-21 03:20:51.376995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:04.072 BaseBdev2 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 BaseBdev3_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 true 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 [2024-11-21 03:20:51.403476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:04.072 [2024-11-21 03:20:51.403547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.072 [2024-11-21 03:20:51.403565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:04.072 [2024-11-21 03:20:51.403577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.072 [2024-11-21 03:20:51.405729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.072 [2024-11-21 03:20:51.405776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:04.072 BaseBdev3 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 BaseBdev4_malloc 00:12:04.072 
03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 true 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 [2024-11-21 03:20:51.443766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:04.072 [2024-11-21 03:20:51.443930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.072 [2024-11-21 03:20:51.443953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:04.072 [2024-11-21 03:20:51.443965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.072 [2024-11-21 03:20:51.446189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.072 [2024-11-21 03:20:51.446273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:04.072 BaseBdev4 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:04.072 03:20:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 [2024-11-21 03:20:51.451795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.072 [2024-11-21 03:20:51.453757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.072 [2024-11-21 03:20:51.453834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.072 [2024-11-21 03:20:51.453887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.072 [2024-11-21 03:20:51.454116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.072 [2024-11-21 03:20:51.454134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.072 [2024-11-21 03:20:51.454393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:12:04.072 [2024-11-21 03:20:51.454560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.072 [2024-11-21 03:20:51.454571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:04.072 [2024-11-21 03:20:51.454710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.072 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.072 "name": "raid_bdev1", 00:12:04.072 "uuid": "924caa08-4e2e-4777-91b7-932c0d63c09b", 00:12:04.072 "strip_size_kb": 0, 00:12:04.072 "state": "online", 00:12:04.072 "raid_level": "raid1", 00:12:04.072 "superblock": true, 00:12:04.072 "num_base_bdevs": 4, 00:12:04.072 "num_base_bdevs_discovered": 4, 00:12:04.072 "num_base_bdevs_operational": 4, 00:12:04.072 "base_bdevs_list": [ 00:12:04.072 { 00:12:04.072 "name": "BaseBdev1", 00:12:04.072 "uuid": "263fb1bb-eae2-580a-967b-78ad5928c292", 00:12:04.073 "is_configured": true, 00:12:04.073 "data_offset": 2048, 00:12:04.073 "data_size": 63488 00:12:04.073 }, 00:12:04.073 { 00:12:04.073 
"name": "BaseBdev2", 00:12:04.073 "uuid": "ab4235d9-c119-5777-9bd7-877d6f68d94f", 00:12:04.073 "is_configured": true, 00:12:04.073 "data_offset": 2048, 00:12:04.073 "data_size": 63488 00:12:04.073 }, 00:12:04.073 { 00:12:04.073 "name": "BaseBdev3", 00:12:04.073 "uuid": "9f91dbba-1d86-53c3-8176-40d3e9658870", 00:12:04.073 "is_configured": true, 00:12:04.073 "data_offset": 2048, 00:12:04.073 "data_size": 63488 00:12:04.073 }, 00:12:04.073 { 00:12:04.073 "name": "BaseBdev4", 00:12:04.073 "uuid": "df794609-2976-5e0d-a92f-62e00b214af4", 00:12:04.073 "is_configured": true, 00:12:04.073 "data_offset": 2048, 00:12:04.073 "data_size": 63488 00:12:04.073 } 00:12:04.073 ] 00:12:04.073 }' 00:12:04.073 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.073 03:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.333 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:04.333 03:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:04.593 [2024-11-21 03:20:51.908333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.535 [2024-11-21 03:20:52.841092] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:05.535 [2024-11-21 03:20:52.841162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.535 [2024-11-21 03:20:52.841409] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006e50 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.535 "name": "raid_bdev1", 00:12:05.535 "uuid": "924caa08-4e2e-4777-91b7-932c0d63c09b", 00:12:05.535 "strip_size_kb": 0, 00:12:05.535 "state": "online", 00:12:05.535 "raid_level": "raid1", 00:12:05.535 "superblock": true, 00:12:05.535 "num_base_bdevs": 4, 00:12:05.535 "num_base_bdevs_discovered": 3, 00:12:05.535 "num_base_bdevs_operational": 3, 00:12:05.535 "base_bdevs_list": [ 00:12:05.535 { 00:12:05.535 "name": null, 00:12:05.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.535 "is_configured": false, 00:12:05.535 "data_offset": 0, 00:12:05.535 "data_size": 63488 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "name": "BaseBdev2", 00:12:05.535 "uuid": "ab4235d9-c119-5777-9bd7-877d6f68d94f", 00:12:05.535 "is_configured": true, 00:12:05.535 "data_offset": 2048, 00:12:05.535 "data_size": 63488 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "name": "BaseBdev3", 00:12:05.535 "uuid": "9f91dbba-1d86-53c3-8176-40d3e9658870", 00:12:05.535 "is_configured": true, 00:12:05.535 "data_offset": 2048, 00:12:05.535 "data_size": 63488 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "name": "BaseBdev4", 00:12:05.535 "uuid": "df794609-2976-5e0d-a92f-62e00b214af4", 00:12:05.535 "is_configured": true, 00:12:05.535 "data_offset": 2048, 00:12:05.535 "data_size": 63488 00:12:05.535 } 00:12:05.535 ] 00:12:05.535 }' 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.535 03:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:12:05.795 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.795 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 [2024-11-21 03:20:53.330165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.795 [2024-11-21 03:20:53.330301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.795 [2024-11-21 03:20:53.332879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.796 [2024-11-21 03:20:53.333006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.796 [2024-11-21 03:20:53.333126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.796 [2024-11-21 03:20:53.333138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:05.796 { 00:12:05.796 "results": [ 00:12:05.796 { 00:12:05.796 "job": "raid_bdev1", 00:12:05.796 "core_mask": "0x1", 00:12:05.796 "workload": "randrw", 00:12:05.796 "percentage": 50, 00:12:05.796 "status": "finished", 00:12:05.796 "queue_depth": 1, 00:12:05.796 "io_size": 131072, 00:12:05.796 "runtime": 1.41984, 00:12:05.796 "iops": 11697.09263015551, 00:12:05.796 "mibps": 1462.1365787694388, 00:12:05.796 "io_failed": 0, 00:12:05.796 "io_timeout": 0, 00:12:05.796 "avg_latency_us": 82.8201858531222, 00:12:05.796 "min_latency_us": 23.763618931404167, 00:12:05.796 "max_latency_us": 1449.4691885295913 00:12:05.796 } 00:12:05.796 ], 00:12:05.796 "core_count": 1 00:12:05.796 } 00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87949 00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 87949 ']' 
00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 87949 00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.796 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87949 00:12:06.054 killing process with pid 87949 00:12:06.054 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.054 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.054 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87949' 00:12:06.054 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 87949 00:12:06.054 [2024-11-21 03:20:53.370568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.054 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 87949 00:12:06.054 [2024-11-21 03:20:53.406794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p4xE4amvkj 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:06.314 03:20:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:06.314 ************************************ 00:12:06.314 END TEST raid_write_error_test 00:12:06.314 ************************************ 00:12:06.314 00:12:06.314 real 0m3.308s 00:12:06.314 user 0m4.169s 00:12:06.314 sys 0m0.542s 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.314 03:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.314 03:20:53 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:06.314 03:20:53 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:06.314 03:20:53 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:06.314 03:20:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:06.314 03:20:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.314 03:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.314 ************************************ 00:12:06.314 START TEST raid_rebuild_test 00:12:06.314 ************************************ 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:06.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88078 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88078 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 88078 ']' 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.314 03:20:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.314 [2024-11-21 03:20:53.798050] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:12:06.314 [2024-11-21 03:20:53.798292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88078 ] 00:12:06.314 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:06.314 Zero copy mechanism will not be used. 00:12:06.575 [2024-11-21 03:20:53.940575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:06.575 [2024-11-21 03:20:53.977765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.575 [2024-11-21 03:20:54.007921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.575 [2024-11-21 03:20:54.051291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.575 [2024-11-21 03:20:54.051418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.146 BaseBdev1_malloc 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.146 [2024-11-21 03:20:54.659619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:07.146 [2024-11-21 03:20:54.659716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.146 [2024-11-21 03:20:54.659750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:07.146 [2024-11-21 03:20:54.659765] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.146 [2024-11-21 03:20:54.662289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.146 [2024-11-21 03:20:54.662341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:07.146 BaseBdev1 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.146 BaseBdev2_malloc 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.146 [2024-11-21 03:20:54.684494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:07.146 [2024-11-21 03:20:54.684564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.146 [2024-11-21 03:20:54.684584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:07.146 [2024-11-21 03:20:54.684595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.146 [2024-11-21 03:20:54.686785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.146 [2024-11-21 03:20:54.686914] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:07.146 BaseBdev2 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.146 spare_malloc 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.146 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 spare_delay 00:12:07.406 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.406 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.407 [2024-11-21 03:20:54.717314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:07.407 [2024-11-21 03:20:54.717472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.407 [2024-11-21 03:20:54.717498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:07.407 [2024-11-21 03:20:54.717511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.407 [2024-11-21 
03:20:54.719766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.407 [2024-11-21 03:20:54.719811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:07.407 spare 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.407 [2024-11-21 03:20:54.725377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.407 [2024-11-21 03:20:54.727499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.407 [2024-11-21 03:20:54.727599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:07.407 [2024-11-21 03:20:54.727612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:07.407 [2024-11-21 03:20:54.727923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:07.407 [2024-11-21 03:20:54.728104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:07.407 [2024-11-21 03:20:54.728118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:07.407 [2024-11-21 03:20:54.728273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.407 03:20:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.407 "name": "raid_bdev1", 00:12:07.407 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:07.407 "strip_size_kb": 0, 00:12:07.407 "state": "online", 00:12:07.407 "raid_level": "raid1", 00:12:07.407 "superblock": false, 00:12:07.407 "num_base_bdevs": 2, 00:12:07.407 "num_base_bdevs_discovered": 2, 00:12:07.407 "num_base_bdevs_operational": 2, 00:12:07.407 "base_bdevs_list": [ 00:12:07.407 { 00:12:07.407 "name": "BaseBdev1", 
00:12:07.407 "uuid": "a6281dea-1e15-513b-99f6-57682a780e10", 00:12:07.407 "is_configured": true, 00:12:07.407 "data_offset": 0, 00:12:07.407 "data_size": 65536 00:12:07.407 }, 00:12:07.407 { 00:12:07.407 "name": "BaseBdev2", 00:12:07.407 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:07.407 "is_configured": true, 00:12:07.407 "data_offset": 0, 00:12:07.407 "data_size": 65536 00:12:07.407 } 00:12:07.407 ] 00:12:07.407 }' 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.407 03:20:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.667 [2024-11-21 03:20:55.129800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:07.667 
03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:07.667 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.668 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:07.927 [2024-11-21 03:20:55.413650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:07.927 /dev/nbd0 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:07.927 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.927 1+0 records in 00:12:07.927 1+0 records out 00:12:07.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348927 s, 11.7 MB/s 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:07.928 03:20:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:12.122 65536+0 records in 00:12:12.122 65536+0 records out 00:12:12.122 33554432 bytes (34 MB, 32 MiB) copied, 4.0954 s, 8.2 MB/s 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.122 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.383 [2024-11-21 03:20:59.850662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 [2024-11-21 03:20:59.871478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.383 03:20:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.383 "name": "raid_bdev1", 00:12:12.383 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:12.383 "strip_size_kb": 0, 00:12:12.383 "state": "online", 00:12:12.383 "raid_level": "raid1", 00:12:12.383 "superblock": false, 00:12:12.383 "num_base_bdevs": 2, 00:12:12.383 "num_base_bdevs_discovered": 1, 00:12:12.383 "num_base_bdevs_operational": 1, 00:12:12.383 "base_bdevs_list": [ 00:12:12.383 { 00:12:12.383 "name": null, 00:12:12.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.383 "is_configured": false, 00:12:12.383 "data_offset": 0, 00:12:12.383 "data_size": 65536 00:12:12.383 }, 00:12:12.383 { 00:12:12.383 "name": "BaseBdev2", 00:12:12.383 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:12.383 "is_configured": true, 00:12:12.383 "data_offset": 0, 00:12:12.383 "data_size": 65536 00:12:12.383 } 00:12:12.383 ] 00:12:12.383 }' 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.383 03:20:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.951 03:21:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.951 03:21:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.951 03:21:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.951 [2024-11-21 03:21:00.311604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.951 [2024-11-21 03:21:00.327047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:12:12.951 03:21:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.951 03:21:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:12.951 [2024-11-21 03:21:00.329614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.888 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.888 "name": "raid_bdev1", 00:12:13.888 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:13.888 "strip_size_kb": 0, 00:12:13.888 "state": "online", 00:12:13.888 "raid_level": "raid1", 00:12:13.888 "superblock": false, 00:12:13.888 "num_base_bdevs": 2, 00:12:13.888 "num_base_bdevs_discovered": 2, 00:12:13.888 "num_base_bdevs_operational": 2, 00:12:13.888 "process": { 00:12:13.888 "type": "rebuild", 00:12:13.888 "target": "spare", 00:12:13.888 "progress": { 00:12:13.888 "blocks": 20480, 00:12:13.888 "percent": 31 00:12:13.888 } 00:12:13.888 }, 00:12:13.888 "base_bdevs_list": [ 00:12:13.888 { 00:12:13.888 "name": "spare", 00:12:13.888 "uuid": "b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:13.888 "is_configured": true, 00:12:13.888 "data_offset": 0, 00:12:13.888 
"data_size": 65536 00:12:13.888 }, 00:12:13.888 { 00:12:13.888 "name": "BaseBdev2", 00:12:13.889 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:13.889 "is_configured": true, 00:12:13.889 "data_offset": 0, 00:12:13.889 "data_size": 65536 00:12:13.889 } 00:12:13.889 ] 00:12:13.889 }' 00:12:13.889 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.889 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.889 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.148 [2024-11-21 03:21:01.475684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.148 [2024-11-21 03:21:01.537343] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.148 [2024-11-21 03:21:01.537441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.148 [2024-11-21 03:21:01.537456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.148 [2024-11-21 03:21:01.537465] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.148 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.149 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.149 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.149 "name": "raid_bdev1", 00:12:14.149 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:14.149 "strip_size_kb": 0, 00:12:14.149 "state": "online", 00:12:14.149 "raid_level": "raid1", 00:12:14.149 "superblock": false, 00:12:14.149 "num_base_bdevs": 2, 00:12:14.149 "num_base_bdevs_discovered": 1, 00:12:14.149 "num_base_bdevs_operational": 1, 00:12:14.149 "base_bdevs_list": [ 00:12:14.149 { 00:12:14.149 "name": null, 00:12:14.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.149 
"is_configured": false, 00:12:14.149 "data_offset": 0, 00:12:14.149 "data_size": 65536 00:12:14.149 }, 00:12:14.149 { 00:12:14.149 "name": "BaseBdev2", 00:12:14.149 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:14.149 "is_configured": true, 00:12:14.149 "data_offset": 0, 00:12:14.149 "data_size": 65536 00:12:14.149 } 00:12:14.149 ] 00:12:14.149 }' 00:12:14.149 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.149 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.718 03:21:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.718 "name": "raid_bdev1", 00:12:14.718 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:14.718 "strip_size_kb": 0, 00:12:14.718 "state": "online", 00:12:14.718 "raid_level": "raid1", 00:12:14.718 "superblock": false, 00:12:14.718 "num_base_bdevs": 2, 00:12:14.718 
"num_base_bdevs_discovered": 1, 00:12:14.718 "num_base_bdevs_operational": 1, 00:12:14.718 "base_bdevs_list": [ 00:12:14.718 { 00:12:14.718 "name": null, 00:12:14.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.718 "is_configured": false, 00:12:14.718 "data_offset": 0, 00:12:14.718 "data_size": 65536 00:12:14.718 }, 00:12:14.718 { 00:12:14.718 "name": "BaseBdev2", 00:12:14.718 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:14.718 "is_configured": true, 00:12:14.718 "data_offset": 0, 00:12:14.718 "data_size": 65536 00:12:14.718 } 00:12:14.718 ] 00:12:14.718 }' 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.718 [2024-11-21 03:21:02.106882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.718 [2024-11-21 03:21:02.112012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a0b0 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.718 03:21:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:14.718 [2024-11-21 03:21:02.114110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.658 "name": "raid_bdev1", 00:12:15.658 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:15.658 "strip_size_kb": 0, 00:12:15.658 "state": "online", 00:12:15.658 "raid_level": "raid1", 00:12:15.658 "superblock": false, 00:12:15.658 "num_base_bdevs": 2, 00:12:15.658 "num_base_bdevs_discovered": 2, 00:12:15.658 "num_base_bdevs_operational": 2, 00:12:15.658 "process": { 00:12:15.658 "type": "rebuild", 00:12:15.658 "target": "spare", 00:12:15.658 "progress": { 00:12:15.658 "blocks": 20480, 00:12:15.658 "percent": 31 00:12:15.658 } 00:12:15.658 }, 00:12:15.658 "base_bdevs_list": [ 00:12:15.658 { 00:12:15.658 "name": "spare", 00:12:15.658 "uuid": "b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:15.658 "is_configured": true, 00:12:15.658 "data_offset": 0, 00:12:15.658 "data_size": 65536 00:12:15.658 }, 00:12:15.658 { 00:12:15.658 "name": "BaseBdev2", 00:12:15.658 "uuid": 
"a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:15.658 "is_configured": true, 00:12:15.658 "data_offset": 0, 00:12:15.658 "data_size": 65536 00:12:15.658 } 00:12:15.658 ] 00:12:15.658 }' 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.658 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=298 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.918 "name": "raid_bdev1", 00:12:15.918 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:15.918 "strip_size_kb": 0, 00:12:15.918 "state": "online", 00:12:15.918 "raid_level": "raid1", 00:12:15.918 "superblock": false, 00:12:15.918 "num_base_bdevs": 2, 00:12:15.918 "num_base_bdevs_discovered": 2, 00:12:15.918 "num_base_bdevs_operational": 2, 00:12:15.918 "process": { 00:12:15.918 "type": "rebuild", 00:12:15.918 "target": "spare", 00:12:15.918 "progress": { 00:12:15.918 "blocks": 22528, 00:12:15.918 "percent": 34 00:12:15.918 } 00:12:15.918 }, 00:12:15.918 "base_bdevs_list": [ 00:12:15.918 { 00:12:15.918 "name": "spare", 00:12:15.918 "uuid": "b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:15.918 "is_configured": true, 00:12:15.918 "data_offset": 0, 00:12:15.918 "data_size": 65536 00:12:15.918 }, 00:12:15.918 { 00:12:15.918 "name": "BaseBdev2", 00:12:15.918 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:15.918 "is_configured": true, 00:12:15.918 "data_offset": 0, 00:12:15.918 "data_size": 65536 00:12:15.918 } 00:12:15.918 ] 00:12:15.918 }' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.918 03:21:03 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.859 03:21:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.120 "name": "raid_bdev1", 00:12:17.120 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:17.120 "strip_size_kb": 0, 00:12:17.120 "state": "online", 00:12:17.120 "raid_level": "raid1", 00:12:17.120 "superblock": false, 00:12:17.120 "num_base_bdevs": 2, 00:12:17.120 "num_base_bdevs_discovered": 2, 00:12:17.120 "num_base_bdevs_operational": 2, 00:12:17.120 "process": { 00:12:17.120 "type": "rebuild", 00:12:17.120 "target": "spare", 00:12:17.120 "progress": { 00:12:17.120 "blocks": 45056, 00:12:17.120 "percent": 68 00:12:17.120 } 00:12:17.120 }, 00:12:17.120 "base_bdevs_list": [ 00:12:17.120 { 00:12:17.120 "name": "spare", 00:12:17.120 "uuid": 
"b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:17.120 "is_configured": true, 00:12:17.120 "data_offset": 0, 00:12:17.120 "data_size": 65536 00:12:17.120 }, 00:12:17.120 { 00:12:17.120 "name": "BaseBdev2", 00:12:17.120 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:17.120 "is_configured": true, 00:12:17.120 "data_offset": 0, 00:12:17.120 "data_size": 65536 00:12:17.120 } 00:12:17.120 ] 00:12:17.120 }' 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.120 03:21:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:18.061 [2024-11-21 03:21:05.332665] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:18.061 [2024-11-21 03:21:05.332746] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:18.061 [2024-11-21 03:21:05.332797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.061 03:21:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.061 "name": "raid_bdev1", 00:12:18.061 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:18.061 "strip_size_kb": 0, 00:12:18.061 "state": "online", 00:12:18.061 "raid_level": "raid1", 00:12:18.061 "superblock": false, 00:12:18.061 "num_base_bdevs": 2, 00:12:18.061 "num_base_bdevs_discovered": 2, 00:12:18.061 "num_base_bdevs_operational": 2, 00:12:18.061 "base_bdevs_list": [ 00:12:18.061 { 00:12:18.061 "name": "spare", 00:12:18.061 "uuid": "b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:18.061 "is_configured": true, 00:12:18.061 "data_offset": 0, 00:12:18.061 "data_size": 65536 00:12:18.061 }, 00:12:18.061 { 00:12:18.061 "name": "BaseBdev2", 00:12:18.061 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:18.061 "is_configured": true, 00:12:18.061 "data_offset": 0, 00:12:18.061 "data_size": 65536 00:12:18.061 } 00:12:18.061 ] 00:12:18.061 }' 00:12:18.061 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.322 "name": "raid_bdev1", 00:12:18.322 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:18.322 "strip_size_kb": 0, 00:12:18.322 "state": "online", 00:12:18.322 "raid_level": "raid1", 00:12:18.322 "superblock": false, 00:12:18.322 "num_base_bdevs": 2, 00:12:18.322 "num_base_bdevs_discovered": 2, 00:12:18.322 "num_base_bdevs_operational": 2, 00:12:18.322 "base_bdevs_list": [ 00:12:18.322 { 00:12:18.322 "name": "spare", 00:12:18.322 "uuid": "b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:18.322 "is_configured": true, 00:12:18.322 "data_offset": 0, 00:12:18.322 "data_size": 65536 00:12:18.322 }, 00:12:18.322 { 00:12:18.322 "name": "BaseBdev2", 00:12:18.322 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:18.322 "is_configured": true, 00:12:18.322 "data_offset": 0, 00:12:18.322 "data_size": 65536 
00:12:18.322 } 00:12:18.322 ] 00:12:18.322 }' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.322 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.322 
03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.581 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.581 "name": "raid_bdev1", 00:12:18.581 "uuid": "bfe3cd71-ae91-4018-8ccf-32b54f4bcc03", 00:12:18.581 "strip_size_kb": 0, 00:12:18.581 "state": "online", 00:12:18.581 "raid_level": "raid1", 00:12:18.581 "superblock": false, 00:12:18.581 "num_base_bdevs": 2, 00:12:18.581 "num_base_bdevs_discovered": 2, 00:12:18.581 "num_base_bdevs_operational": 2, 00:12:18.581 "base_bdevs_list": [ 00:12:18.581 { 00:12:18.581 "name": "spare", 00:12:18.581 "uuid": "b77c5d45-d3d7-5d3b-992c-1bc15a9da578", 00:12:18.581 "is_configured": true, 00:12:18.581 "data_offset": 0, 00:12:18.581 "data_size": 65536 00:12:18.581 }, 00:12:18.581 { 00:12:18.581 "name": "BaseBdev2", 00:12:18.581 "uuid": "a3cb7cf3-6a24-5b7e-8e77-cd95081d686c", 00:12:18.581 "is_configured": true, 00:12:18.581 "data_offset": 0, 00:12:18.581 "data_size": 65536 00:12:18.581 } 00:12:18.581 ] 00:12:18.581 }' 00:12:18.581 03:21:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.581 03:21:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.841 [2024-11-21 03:21:06.309979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.841 [2024-11-21 03:21:06.310123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.841 [2024-11-21 03:21:06.310266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.841 [2024-11-21 03:21:06.310368] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.841 [2024-11-21 03:21:06.310442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.841 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:19.102 /dev/nbd0 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.102 1+0 records in 00:12:19.102 1+0 records out 00:12:19.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057903 s, 7.1 MB/s 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.102 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:19.363 /dev/nbd1 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.363 1+0 records in 00:12:19.363 1+0 records out 00:12:19.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282538 s, 14.5 MB/s 00:12:19.363 03:21:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.363 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.623 03:21:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.623 
03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.623 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.883 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.883 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.883 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.883 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.883 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88078 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 88078 ']' 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 88078 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88078 00:12:19.884 killing process with pid 88078 00:12:19.884 Received shutdown signal, test time was about 60.000000 seconds 00:12:19.884 00:12:19.884 Latency(us) 00:12:19.884 [2024-11-21T03:21:07.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.884 [2024-11-21T03:21:07.450Z] =================================================================================================================== 00:12:19.884 [2024-11-21T03:21:07.450Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88078' 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 88078 00:12:19.884 [2024-11-21 03:21:07.440200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.884 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 88078 00:12:20.144 [2024-11-21 03:21:07.471893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.144 03:21:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:20.144 00:12:20.144 real 0m13.992s 00:12:20.144 user 0m16.192s 00:12:20.144 sys 0m3.007s 00:12:20.144 03:21:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.144 ************************************ 00:12:20.144 END TEST raid_rebuild_test 00:12:20.144 ************************************ 00:12:20.144 03:21:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 03:21:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:20.420 03:21:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:20.420 03:21:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.420 03:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 ************************************ 00:12:20.420 START TEST raid_rebuild_test_sb 00:12:20.420 ************************************ 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:20.420 03:21:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88485 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88485 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88485 ']' 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.420 
03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.420 03:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.420 Zero copy mechanism will not be used. 00:12:20.420 [2024-11-21 03:21:07.863345] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:12:20.420 [2024-11-21 03:21:07.863479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88485 ] 00:12:20.693 [2024-11-21 03:21:08.003419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:20.693 [2024-11-21 03:21:08.028326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:20.693 [2024-11-21 03:21:08.058281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:20.693 [2024-11-21 03:21:08.102692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:20.693 [2024-11-21 03:21:08.102731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.263 BaseBdev1_malloc
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.263 [2024-11-21 03:21:08.727078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:12:21.263 [2024-11-21 03:21:08.727165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:21.263 [2024-11-21 03:21:08.727194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:21.263 [2024-11-21 03:21:08.727216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:21.263 [2024-11-21 03:21:08.729358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:21.263 [2024-11-21 03:21:08.729398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:21.263 BaseBdev1
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.263 BaseBdev2_malloc
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.263 [2024-11-21 03:21:08.756201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:21.263 [2024-11-21 03:21:08.756353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:21.263 [2024-11-21 03:21:08.756377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:21.263 [2024-11-21 03:21:08.756388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:21.263 [2024-11-21 03:21:08.758506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:21.263 [2024-11-21 03:21:08.758550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:21.263 BaseBdev2
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:21.263 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.264 spare_malloc
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.264 spare_delay
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.264 [2024-11-21 03:21:08.797077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:21.264 [2024-11-21 03:21:08.797171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:21.264 [2024-11-21 03:21:08.797196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:12:21.264 [2024-11-21 03:21:08.797211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:21.264 [2024-11-21 03:21:08.799610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:21.264 [2024-11-21 03:21:08.799697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:21.264 spare
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.264 [2024-11-21 03:21:08.809147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:21.264 [2024-11-21 03:21:08.811053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:21.264 [2024-11-21 03:21:08.811213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:12:21.264 [2024-11-21 03:21:08.811228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:21.264 [2024-11-21 03:21:08.811502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:12:21.264 [2024-11-21 03:21:08.811655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:12:21.264 [2024-11-21 03:21:08.811665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:12:21.264 [2024-11-21 03:21:08.811794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.264 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.523 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.523 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:21.524 "name": "raid_bdev1",
00:12:21.524 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:21.524 "strip_size_kb": 0,
00:12:21.524 "state": "online",
00:12:21.524 "raid_level": "raid1",
00:12:21.524 "superblock": true,
00:12:21.524 "num_base_bdevs": 2,
00:12:21.524 "num_base_bdevs_discovered": 2,
00:12:21.524 "num_base_bdevs_operational": 2,
00:12:21.524 "base_bdevs_list": [
00:12:21.524 {
00:12:21.524 "name": "BaseBdev1",
00:12:21.524 "uuid": "a8dc54dc-2895-5190-8757-f24c7d02e0dc",
00:12:21.524 "is_configured": true,
00:12:21.524 "data_offset": 2048,
00:12:21.524 "data_size": 63488
00:12:21.524 },
00:12:21.524 {
00:12:21.524 "name": "BaseBdev2",
00:12:21.524 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907",
00:12:21.524 "is_configured": true,
00:12:21.524 "data_offset": 2048,
00:12:21.524 "data_size": 63488
00:12:21.524 }
00:12:21.524 ]
00:12:21.524 }'
00:12:21.524 03:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:21.524 03:21:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.791 [2024-11-21 03:21:09.301571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:21.791 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:12:22.051 [2024-11-21 03:21:09.573413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
/dev/nbd0
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:22.051 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:22.310 1+0 records in
00:12:22.310 1+0 records out
00:12:22.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254127 s, 16.1 MB/s
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:12:22.310 03:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:12:26.502 63488+0 records in
00:12:26.502 63488+0 records out
00:12:26.502 32505856 bytes (33 MB, 31 MiB) copied, 3.92257 s, 8.3 MB/s
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:26.502 [2024-11-21 03:21:13.783227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.502 [2024-11-21 03:21:13.796845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.502 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:26.503 "name": "raid_bdev1",
00:12:26.503 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:26.503 "strip_size_kb": 0,
00:12:26.503 "state": "online",
00:12:26.503 "raid_level": "raid1",
00:12:26.503 "superblock": true,
00:12:26.503 "num_base_bdevs": 2,
00:12:26.503 "num_base_bdevs_discovered": 1,
00:12:26.503 "num_base_bdevs_operational": 1,
00:12:26.503 "base_bdevs_list": [
00:12:26.503 {
00:12:26.503 "name": null,
00:12:26.503 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:26.503 "is_configured": false,
00:12:26.503 "data_offset": 0,
00:12:26.503 "data_size": 63488
00:12:26.503 },
00:12:26.503 {
00:12:26.503 "name": "BaseBdev2",
00:12:26.503 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907",
00:12:26.503 "is_configured": true,
00:12:26.503 "data_offset": 2048,
00:12:26.503 "data_size": 63488
00:12:26.503 }
00:12:26.503 ]
00:12:26.503 }'
00:12:26.503 03:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:26.503 03:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.762 03:21:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:26.762 03:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.762 03:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:26.762 [2024-11-21 03:21:14.252974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:26.762 [2024-11-21 03:21:14.269052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770
00:12:26.762 03:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:26.762 03:21:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:26.762 [2024-11-21 03:21:14.271530] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:28.139 "name": "raid_bdev1",
00:12:28.139 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:28.139 "strip_size_kb": 0,
00:12:28.139 "state": "online",
00:12:28.139 "raid_level": "raid1",
00:12:28.139 "superblock": true,
00:12:28.139 "num_base_bdevs": 2,
00:12:28.139 "num_base_bdevs_discovered": 2,
00:12:28.139 "num_base_bdevs_operational": 2,
00:12:28.139 "process": {
00:12:28.139 "type": "rebuild",
00:12:28.139 "target": "spare",
00:12:28.139 "progress": {
00:12:28.139 "blocks": 20480,
00:12:28.139 "percent": 32
00:12:28.139 }
00:12:28.139 },
00:12:28.139 "base_bdevs_list": [
00:12:28.139 {
00:12:28.139 "name": "spare",
00:12:28.139 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717",
00:12:28.139 "is_configured": true,
00:12:28.139 "data_offset": 2048,
00:12:28.139 "data_size": 63488
00:12:28.139 },
00:12:28.139 {
00:12:28.139 "name": "BaseBdev2",
00:12:28.139 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907",
00:12:28.139 "is_configured": true,
00:12:28.139 "data_offset": 2048,
00:12:28.139 "data_size": 63488
00:12:28.139 }
00:12:28.139 ]
00:12:28.139 }'
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:28.139 [2024-11-21 03:21:15.425151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:28.139 [2024-11-21 03:21:15.479364] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:28.139 [2024-11-21 03:21:15.479529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:28.139 [2024-11-21 03:21:15.479570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:28.139 [2024-11-21 03:21:15.479596] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:28.139 "name": "raid_bdev1",
00:12:28.139 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:28.139 "strip_size_kb": 0,
00:12:28.139 "state": "online",
00:12:28.139 "raid_level": "raid1",
00:12:28.139 "superblock": true,
00:12:28.139 "num_base_bdevs": 2,
00:12:28.139 "num_base_bdevs_discovered": 1,
00:12:28.139 "num_base_bdevs_operational": 1,
00:12:28.139 "base_bdevs_list": [
00:12:28.139 {
00:12:28.139 "name": null,
00:12:28.139 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:28.139 "is_configured": false,
00:12:28.139 "data_offset": 0,
00:12:28.139 "data_size": 63488
00:12:28.139 },
00:12:28.139 {
00:12:28.139 "name": "BaseBdev2",
00:12:28.139 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907",
00:12:28.139 "is_configured": true,
00:12:28.139 "data_offset": 2048,
00:12:28.139 "data_size": 63488
00:12:28.139 }
00:12:28.139 ]
00:12:28.139 }'
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:28.139 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:28.399 "name": "raid_bdev1",
00:12:28.399 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:28.399 "strip_size_kb": 0,
00:12:28.399 "state": "online",
00:12:28.399 "raid_level": "raid1",
00:12:28.399 "superblock": true,
00:12:28.399 "num_base_bdevs": 2,
00:12:28.399 "num_base_bdevs_discovered": 1,
00:12:28.399 "num_base_bdevs_operational": 1,
00:12:28.399 "base_bdevs_list": [
00:12:28.399 {
00:12:28.399 "name": null,
00:12:28.399 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:28.399 "is_configured": false,
00:12:28.399 "data_offset": 0,
00:12:28.399 "data_size": 63488
00:12:28.399 },
00:12:28.399 {
00:12:28.399 "name": "BaseBdev2",
00:12:28.399 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907",
00:12:28.399 "is_configured": true,
00:12:28.399 "data_offset": 2048,
00:12:28.399 "data_size": 63488
00:12:28.399 }
00:12:28.399 ]
00:12:28.399 }'
00:12:28.399 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:28.661 03:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:28.661 [2024-11-21 03:21:16.048995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:28.661 [2024-11-21 03:21:16.054199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3840
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.661 03:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:28.661 [2024-11-21 03:21:16.056229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:29.602 "name": "raid_bdev1",
00:12:29.602 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:29.602 "strip_size_kb": 0,
00:12:29.602 "state": "online",
00:12:29.602 "raid_level": "raid1",
00:12:29.602 "superblock": true,
00:12:29.602 "num_base_bdevs": 2,
00:12:29.602 "num_base_bdevs_discovered": 2,
00:12:29.602 "num_base_bdevs_operational": 2,
00:12:29.602 "process": {
00:12:29.602 "type": "rebuild",
00:12:29.602 "target": "spare",
00:12:29.602 "progress": {
00:12:29.602 "blocks": 20480,
00:12:29.602 "percent": 32
00:12:29.602 }
00:12:29.602 },
00:12:29.602 "base_bdevs_list": [
00:12:29.602 {
00:12:29.602 "name": "spare",
00:12:29.602 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717",
00:12:29.602 "is_configured": true,
00:12:29.602 "data_offset": 2048,
00:12:29.602 "data_size": 63488
00:12:29.602 },
00:12:29.602 {
00:12:29.602 "name": "BaseBdev2",
00:12:29.602 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907",
00:12:29.602 "is_configured": true,
00:12:29.602 "data_offset": 2048,
00:12:29.602 "data_size": 63488
00:12:29.602 }
00:12:29.602 ]
00:12:29.602 }'
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:29.602 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=312
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:29.862 "name": "raid_bdev1",
00:12:29.862 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3",
00:12:29.862 "strip_size_kb": 0,
00:12:29.862 "state": "online",
00:12:29.862 "raid_level": "raid1",
00:12:29.862 "superblock": true,
00:12:29.862 "num_base_bdevs": 2,
00:12:29.862 "num_base_bdevs_discovered": 2,
00:12:29.862 "num_base_bdevs_operational": 2,
00:12:29.862 "process": {
00:12:29.862 "type": "rebuild",
00:12:29.862 "target": "spare",
00:12:29.862 "progress": {
00:12:29.862 "blocks": 22528,
00:12:29.862 "percent": 35
00:12:29.862 }
00:12:29.862 },
00:12:29.862 "base_bdevs_list": [
00:12:29.862 { 00:12:29.862 "name": "spare", 00:12:29.862 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:29.862 "is_configured": true, 00:12:29.862 "data_offset": 2048, 00:12:29.862 "data_size": 63488 00:12:29.862 }, 00:12:29.862 { 00:12:29.862 "name": "BaseBdev2", 00:12:29.862 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:29.862 "is_configured": true, 00:12:29.862 "data_offset": 2048, 00:12:29.862 "data_size": 63488 00:12:29.862 } 00:12:29.862 ] 00:12:29.862 }' 00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.862 03:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.801 "name": "raid_bdev1", 00:12:30.801 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:30.801 "strip_size_kb": 0, 00:12:30.801 "state": "online", 00:12:30.801 "raid_level": "raid1", 00:12:30.801 "superblock": true, 00:12:30.801 "num_base_bdevs": 2, 00:12:30.801 "num_base_bdevs_discovered": 2, 00:12:30.801 "num_base_bdevs_operational": 2, 00:12:30.801 "process": { 00:12:30.801 "type": "rebuild", 00:12:30.801 "target": "spare", 00:12:30.801 "progress": { 00:12:30.801 "blocks": 45056, 00:12:30.801 "percent": 70 00:12:30.801 } 00:12:30.801 }, 00:12:30.801 "base_bdevs_list": [ 00:12:30.801 { 00:12:30.801 "name": "spare", 00:12:30.801 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:30.801 "is_configured": true, 00:12:30.801 "data_offset": 2048, 00:12:30.801 "data_size": 63488 00:12:30.801 }, 00:12:30.801 { 00:12:30.801 "name": "BaseBdev2", 00:12:30.801 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:30.801 "is_configured": true, 00:12:30.801 "data_offset": 2048, 00:12:30.801 "data_size": 63488 00:12:30.801 } 00:12:30.801 ] 00:12:30.801 }' 00:12:30.801 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.061 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.061 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.061 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.061 03:21:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
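The trace above repeats one polling pattern: query the raid bdev over RPC, extract `.process.type` / `.process.target`, and `sleep 1` until the rebuild process disappears or `SECONDS` exceeds the budget (`timeout=312`). A minimal standalone sketch of that loop, with a hypothetical stub in place of the real `rpc_cmd bdev_raid_get_bdevs` so it runs without a live SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the rebuild-polling loop seen in the trace. The real test
# parses JSON from a running SPDK target; this stub (not part of SPDK)
# reports "rebuild" twice and then "none" so the example runs standalone.
poll_count=0
ptype=
query_process_type() {  # stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq -r '.process.type // "none"'
  poll_count=$((poll_count + 1))
  if (( poll_count <= 2 )); then ptype=rebuild; else ptype=none; fi
}

timeout=5               # the trace budgets 312 seconds for the rebuild
while (( SECONDS < timeout )); do
  query_process_type
  [[ $ptype == rebuild ]] || break  # process gone => rebuild finished
  # the real script sleeps 1s here between RPC queries
done
echo "polled ${poll_count} times, final type: ${ptype}"
```

The `(( SECONDS < timeout ))` guard is what keeps a stuck rebuild from hanging the test forever; when the loop exits with the process still present, the script's later `break`/verify steps fail the test instead.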
00:12:31.630 [2024-11-21 03:21:19.174673] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:31.630 [2024-11-21 03:21:19.174862] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:31.630 [2024-11-21 03:21:19.175044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.201 "name": "raid_bdev1", 00:12:32.201 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:32.201 "strip_size_kb": 0, 00:12:32.201 "state": "online", 00:12:32.201 "raid_level": "raid1", 00:12:32.201 "superblock": true, 00:12:32.201 "num_base_bdevs": 2, 00:12:32.201 
"num_base_bdevs_discovered": 2, 00:12:32.201 "num_base_bdevs_operational": 2, 00:12:32.201 "base_bdevs_list": [ 00:12:32.201 { 00:12:32.201 "name": "spare", 00:12:32.201 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:32.201 "is_configured": true, 00:12:32.201 "data_offset": 2048, 00:12:32.201 "data_size": 63488 00:12:32.201 }, 00:12:32.201 { 00:12:32.201 "name": "BaseBdev2", 00:12:32.201 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:32.201 "is_configured": true, 00:12:32.201 "data_offset": 2048, 00:12:32.201 "data_size": 63488 00:12:32.201 } 00:12:32.201 ] 00:12:32.201 }' 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.201 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.202 "name": "raid_bdev1", 00:12:32.202 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:32.202 "strip_size_kb": 0, 00:12:32.202 "state": "online", 00:12:32.202 "raid_level": "raid1", 00:12:32.202 "superblock": true, 00:12:32.202 "num_base_bdevs": 2, 00:12:32.202 "num_base_bdevs_discovered": 2, 00:12:32.202 "num_base_bdevs_operational": 2, 00:12:32.202 "base_bdevs_list": [ 00:12:32.202 { 00:12:32.202 "name": "spare", 00:12:32.202 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:32.202 "is_configured": true, 00:12:32.202 "data_offset": 2048, 00:12:32.202 "data_size": 63488 00:12:32.202 }, 00:12:32.202 { 00:12:32.202 "name": "BaseBdev2", 00:12:32.202 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:32.202 "is_configured": true, 00:12:32.202 "data_offset": 2048, 00:12:32.202 "data_size": 63488 00:12:32.202 } 00:12:32.202 ] 00:12:32.202 }' 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.202 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.465 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.465 "name": "raid_bdev1", 00:12:32.465 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:32.465 "strip_size_kb": 0, 00:12:32.465 "state": "online", 00:12:32.465 "raid_level": "raid1", 00:12:32.465 "superblock": true, 00:12:32.465 "num_base_bdevs": 2, 00:12:32.465 "num_base_bdevs_discovered": 2, 00:12:32.465 "num_base_bdevs_operational": 2, 00:12:32.465 "base_bdevs_list": [ 00:12:32.465 { 00:12:32.465 "name": "spare", 00:12:32.465 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:32.465 "is_configured": true, 00:12:32.465 "data_offset": 2048, 00:12:32.465 
"data_size": 63488 00:12:32.465 }, 00:12:32.465 { 00:12:32.465 "name": "BaseBdev2", 00:12:32.465 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:32.465 "is_configured": true, 00:12:32.465 "data_offset": 2048, 00:12:32.465 "data_size": 63488 00:12:32.465 } 00:12:32.465 ] 00:12:32.465 }' 00:12:32.465 03:21:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.465 03:21:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.731 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.731 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.731 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.731 [2024-11-21 03:21:20.180300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.731 [2024-11-21 03:21:20.180348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.731 [2024-11-21 03:21:20.180441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.731 [2024-11-21 03:21:20.180520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.731 [2024-11-21 03:21:20.180531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.732 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:32.999 /dev/nbd0 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.999 1+0 records in 00:12:32.999 1+0 records out 00:12:32.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421146 s, 9.7 MB/s 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.999 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:33.258 /dev/nbd1 00:12:33.258 03:21:20 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.258 1+0 records in 00:12:33.258 1+0 records out 00:12:33.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433776 s, 9.4 MB/s 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:33.258 03:21:20 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.258 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.517 03:21:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.517 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.776 [2024-11-21 03:21:21.298233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:33.776 [2024-11-21 03:21:21.298318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.776 [2024-11-21 03:21:21.298347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:33.776 [2024-11-21 03:21:21.298357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.776 [2024-11-21 03:21:21.300587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.776 [2024-11-21 03:21:21.300649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.776 [2024-11-21 03:21:21.300745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:33.776 [2024-11-21 03:21:21.300784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.776 [2024-11-21 03:21:21.300910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.776 spare 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.776 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.036 [2024-11-21 03:21:21.400993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:34.036 [2024-11-21 03:21:21.401061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.036 [2024-11-21 03:21:21.401433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:12:34.036 [2024-11-21 03:21:21.401627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:34.036 [2024-11-21 03:21:21.401649] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:34.036 [2024-11-21 03:21:21.401806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.036 
03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.036 "name": "raid_bdev1", 00:12:34.036 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:34.036 "strip_size_kb": 0, 00:12:34.036 "state": "online", 00:12:34.036 "raid_level": "raid1", 00:12:34.036 "superblock": true, 00:12:34.036 "num_base_bdevs": 2, 00:12:34.036 "num_base_bdevs_discovered": 2, 00:12:34.036 "num_base_bdevs_operational": 2, 00:12:34.036 "base_bdevs_list": [ 00:12:34.036 { 00:12:34.036 "name": "spare", 00:12:34.036 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:34.036 "is_configured": true, 00:12:34.036 "data_offset": 2048, 00:12:34.036 "data_size": 63488 00:12:34.036 }, 00:12:34.036 { 00:12:34.036 "name": "BaseBdev2", 00:12:34.036 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:34.036 "is_configured": true, 00:12:34.036 "data_offset": 2048, 00:12:34.036 "data_size": 63488 00:12:34.036 } 00:12:34.036 ] 00:12:34.036 }' 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.036 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.295 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.554 "name": "raid_bdev1", 00:12:34.554 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:34.554 "strip_size_kb": 0, 00:12:34.554 "state": "online", 00:12:34.554 "raid_level": "raid1", 00:12:34.554 "superblock": true, 00:12:34.554 "num_base_bdevs": 2, 00:12:34.554 "num_base_bdevs_discovered": 2, 00:12:34.554 "num_base_bdevs_operational": 2, 00:12:34.554 "base_bdevs_list": [ 00:12:34.554 { 00:12:34.554 "name": "spare", 00:12:34.554 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:34.554 "is_configured": true, 00:12:34.554 "data_offset": 2048, 00:12:34.554 "data_size": 63488 00:12:34.554 }, 00:12:34.554 { 00:12:34.554 "name": "BaseBdev2", 00:12:34.554 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:34.554 "is_configured": true, 00:12:34.554 "data_offset": 2048, 00:12:34.554 "data_size": 63488 00:12:34.554 } 00:12:34.554 ] 00:12:34.554 }' 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.554 03:21:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.554 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.554 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.554 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:34.554 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.555 [2024-11-21 03:21:22.046481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.555 "name": "raid_bdev1", 00:12:34.555 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:34.555 "strip_size_kb": 0, 00:12:34.555 "state": "online", 00:12:34.555 "raid_level": "raid1", 00:12:34.555 "superblock": true, 00:12:34.555 "num_base_bdevs": 2, 00:12:34.555 "num_base_bdevs_discovered": 1, 00:12:34.555 "num_base_bdevs_operational": 1, 00:12:34.555 "base_bdevs_list": [ 00:12:34.555 { 00:12:34.555 "name": null, 00:12:34.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.555 "is_configured": false, 00:12:34.555 "data_offset": 0, 00:12:34.555 "data_size": 63488 00:12:34.555 }, 00:12:34.555 { 00:12:34.555 "name": "BaseBdev2", 00:12:34.555 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:34.555 "is_configured": true, 00:12:34.555 "data_offset": 2048, 00:12:34.555 "data_size": 63488 00:12:34.555 } 00:12:34.555 ] 00:12:34.555 }' 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.555 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.122 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.122 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.122 03:21:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.122 [2024-11-21 03:21:22.498661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.122 [2024-11-21 03:21:22.498865] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:35.122 [2024-11-21 03:21:22.498885] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:35.122 [2024-11-21 03:21:22.498921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.122 [2024-11-21 03:21:22.503767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1fc0 00:12:35.122 03:21:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.122 03:21:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:35.122 [2024-11-21 03:21:22.505719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.059 "name": "raid_bdev1", 00:12:36.059 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:36.059 "strip_size_kb": 0, 00:12:36.059 "state": "online", 00:12:36.059 "raid_level": "raid1", 00:12:36.059 "superblock": true, 00:12:36.059 "num_base_bdevs": 2, 00:12:36.059 "num_base_bdevs_discovered": 2, 00:12:36.059 "num_base_bdevs_operational": 2, 00:12:36.059 "process": { 00:12:36.059 "type": "rebuild", 00:12:36.059 "target": "spare", 00:12:36.059 "progress": { 00:12:36.059 "blocks": 20480, 00:12:36.059 "percent": 32 00:12:36.059 } 00:12:36.059 }, 00:12:36.059 "base_bdevs_list": [ 00:12:36.059 { 00:12:36.059 "name": "spare", 00:12:36.059 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:36.059 "is_configured": true, 00:12:36.059 "data_offset": 2048, 00:12:36.059 "data_size": 63488 00:12:36.059 }, 00:12:36.059 { 00:12:36.059 "name": "BaseBdev2", 00:12:36.059 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:36.059 "is_configured": true, 00:12:36.059 "data_offset": 2048, 00:12:36.059 "data_size": 63488 00:12:36.059 } 00:12:36.059 ] 00:12:36.059 }' 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.059 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.319 03:21:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.319 [2024-11-21 03:21:23.664297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.319 [2024-11-21 03:21:23.712742] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.319 [2024-11-21 03:21:23.712823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.319 [2024-11-21 03:21:23.712839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.319 [2024-11-21 03:21:23.712850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.319 "name": "raid_bdev1", 00:12:36.319 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:36.319 "strip_size_kb": 0, 00:12:36.319 "state": "online", 00:12:36.319 "raid_level": "raid1", 00:12:36.319 "superblock": true, 00:12:36.319 "num_base_bdevs": 2, 00:12:36.319 "num_base_bdevs_discovered": 1, 00:12:36.319 "num_base_bdevs_operational": 1, 00:12:36.319 "base_bdevs_list": [ 00:12:36.319 { 00:12:36.319 "name": null, 00:12:36.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.319 "is_configured": false, 00:12:36.319 "data_offset": 0, 00:12:36.319 "data_size": 63488 00:12:36.319 }, 00:12:36.319 { 00:12:36.319 "name": "BaseBdev2", 00:12:36.319 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:36.319 "is_configured": true, 00:12:36.319 "data_offset": 2048, 00:12:36.319 "data_size": 63488 00:12:36.319 } 00:12:36.319 ] 00:12:36.319 }' 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.319 03:21:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.888 03:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.888 03:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:36.888 03:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.888 [2024-11-21 03:21:24.173949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.888 [2024-11-21 03:21:24.174149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.888 [2024-11-21 03:21:24.174194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:36.888 [2024-11-21 03:21:24.174230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.888 [2024-11-21 03:21:24.174749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.888 [2024-11-21 03:21:24.174821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.888 [2024-11-21 03:21:24.174951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:36.888 [2024-11-21 03:21:24.175005] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:36.888 [2024-11-21 03:21:24.175080] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:36.888 [2024-11-21 03:21:24.175135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.888 [2024-11-21 03:21:24.180159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:12:36.888 spare 00:12:36.888 03:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.888 03:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:36.888 [2024-11-21 03:21:24.182355] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.828 "name": "raid_bdev1", 00:12:37.828 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:37.828 "strip_size_kb": 0, 00:12:37.828 "state": "online", 00:12:37.828 
"raid_level": "raid1", 00:12:37.828 "superblock": true, 00:12:37.828 "num_base_bdevs": 2, 00:12:37.828 "num_base_bdevs_discovered": 2, 00:12:37.828 "num_base_bdevs_operational": 2, 00:12:37.828 "process": { 00:12:37.828 "type": "rebuild", 00:12:37.828 "target": "spare", 00:12:37.828 "progress": { 00:12:37.828 "blocks": 20480, 00:12:37.828 "percent": 32 00:12:37.828 } 00:12:37.828 }, 00:12:37.828 "base_bdevs_list": [ 00:12:37.828 { 00:12:37.828 "name": "spare", 00:12:37.828 "uuid": "06159c3f-552c-55bf-b66b-31cc4c090717", 00:12:37.828 "is_configured": true, 00:12:37.828 "data_offset": 2048, 00:12:37.828 "data_size": 63488 00:12:37.828 }, 00:12:37.828 { 00:12:37.828 "name": "BaseBdev2", 00:12:37.828 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:37.828 "is_configured": true, 00:12:37.828 "data_offset": 2048, 00:12:37.828 "data_size": 63488 00:12:37.828 } 00:12:37.828 ] 00:12:37.828 }' 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.828 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.829 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.829 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:37.829 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.829 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.829 [2024-11-21 03:21:25.300688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.829 [2024-11-21 03:21:25.389684] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.829 [2024-11-21 03:21:25.389838] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.829 [2024-11-21 03:21:25.389880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.829 [2024-11-21 03:21:25.389902] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.088 03:21:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.088 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.088 "name": "raid_bdev1", 00:12:38.088 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:38.088 "strip_size_kb": 0, 00:12:38.088 "state": "online", 00:12:38.088 "raid_level": "raid1", 00:12:38.088 "superblock": true, 00:12:38.088 "num_base_bdevs": 2, 00:12:38.088 "num_base_bdevs_discovered": 1, 00:12:38.088 "num_base_bdevs_operational": 1, 00:12:38.088 "base_bdevs_list": [ 00:12:38.088 { 00:12:38.088 "name": null, 00:12:38.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.088 "is_configured": false, 00:12:38.089 "data_offset": 0, 00:12:38.089 "data_size": 63488 00:12:38.089 }, 00:12:38.089 { 00:12:38.089 "name": "BaseBdev2", 00:12:38.089 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:38.089 "is_configured": true, 00:12:38.089 "data_offset": 2048, 00:12:38.089 "data_size": 63488 00:12:38.089 } 00:12:38.089 ] 00:12:38.089 }' 00:12:38.089 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.089 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.349 03:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.608 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.608 "name": "raid_bdev1", 00:12:38.608 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:38.608 "strip_size_kb": 0, 00:12:38.608 "state": "online", 00:12:38.608 "raid_level": "raid1", 00:12:38.608 "superblock": true, 00:12:38.608 "num_base_bdevs": 2, 00:12:38.609 "num_base_bdevs_discovered": 1, 00:12:38.609 "num_base_bdevs_operational": 1, 00:12:38.609 "base_bdevs_list": [ 00:12:38.609 { 00:12:38.609 "name": null, 00:12:38.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.609 "is_configured": false, 00:12:38.609 "data_offset": 0, 00:12:38.609 "data_size": 63488 00:12:38.609 }, 00:12:38.609 { 00:12:38.609 "name": "BaseBdev2", 00:12:38.609 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:38.609 "is_configured": true, 00:12:38.609 "data_offset": 2048, 00:12:38.609 "data_size": 63488 00:12:38.609 } 00:12:38.609 ] 00:12:38.609 }' 00:12:38.609 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.609 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.609 03:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 [2024-11-21 03:21:26.030930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.609 [2024-11-21 03:21:26.031118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.609 [2024-11-21 03:21:26.031147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:38.609 [2024-11-21 03:21:26.031156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.609 [2024-11-21 03:21:26.031575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.609 [2024-11-21 03:21:26.031593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.609 [2024-11-21 03:21:26.031676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:38.609 [2024-11-21 03:21:26.031689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:38.609 [2024-11-21 03:21:26.031699] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.609 [2024-11-21 03:21:26.031710] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:38.609 BaseBdev1 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:38.609 03:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.549 "name": "raid_bdev1", 00:12:39.549 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:39.549 "strip_size_kb": 0, 
00:12:39.549 "state": "online", 00:12:39.549 "raid_level": "raid1", 00:12:39.549 "superblock": true, 00:12:39.549 "num_base_bdevs": 2, 00:12:39.549 "num_base_bdevs_discovered": 1, 00:12:39.549 "num_base_bdevs_operational": 1, 00:12:39.549 "base_bdevs_list": [ 00:12:39.549 { 00:12:39.549 "name": null, 00:12:39.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.549 "is_configured": false, 00:12:39.549 "data_offset": 0, 00:12:39.549 "data_size": 63488 00:12:39.549 }, 00:12:39.549 { 00:12:39.549 "name": "BaseBdev2", 00:12:39.549 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:39.549 "is_configured": true, 00:12:39.549 "data_offset": 2048, 00:12:39.549 "data_size": 63488 00:12:39.549 } 00:12:39.549 ] 00:12:39.549 }' 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.549 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.119 "name": "raid_bdev1", 00:12:40.119 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:40.119 "strip_size_kb": 0, 00:12:40.119 "state": "online", 00:12:40.119 "raid_level": "raid1", 00:12:40.119 "superblock": true, 00:12:40.119 "num_base_bdevs": 2, 00:12:40.119 "num_base_bdevs_discovered": 1, 00:12:40.119 "num_base_bdevs_operational": 1, 00:12:40.119 "base_bdevs_list": [ 00:12:40.119 { 00:12:40.119 "name": null, 00:12:40.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.119 "is_configured": false, 00:12:40.119 "data_offset": 0, 00:12:40.119 "data_size": 63488 00:12:40.119 }, 00:12:40.119 { 00:12:40.119 "name": "BaseBdev2", 00:12:40.119 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:40.119 "is_configured": true, 00:12:40.119 "data_offset": 2048, 00:12:40.119 "data_size": 63488 00:12:40.119 } 00:12:40.119 ] 00:12:40.119 }' 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:40.119 03:21:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.119 [2024-11-21 03:21:27.559415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.119 [2024-11-21 03:21:27.559586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:40.119 [2024-11-21 03:21:27.559600] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:40.119 request: 00:12:40.119 { 00:12:40.119 "base_bdev": "BaseBdev1", 00:12:40.119 "raid_bdev": "raid_bdev1", 00:12:40.119 "method": "bdev_raid_add_base_bdev", 00:12:40.119 "req_id": 1 00:12:40.119 } 00:12:40.119 Got JSON-RPC error response 00:12:40.119 response: 00:12:40.119 { 00:12:40.119 "code": -22, 00:12:40.119 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:40.119 } 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.119 03:21:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.059 03:21:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.319 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.319 "name": "raid_bdev1", 00:12:41.319 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 
00:12:41.319 "strip_size_kb": 0, 00:12:41.319 "state": "online", 00:12:41.319 "raid_level": "raid1", 00:12:41.319 "superblock": true, 00:12:41.319 "num_base_bdevs": 2, 00:12:41.319 "num_base_bdevs_discovered": 1, 00:12:41.319 "num_base_bdevs_operational": 1, 00:12:41.319 "base_bdevs_list": [ 00:12:41.319 { 00:12:41.319 "name": null, 00:12:41.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.319 "is_configured": false, 00:12:41.319 "data_offset": 0, 00:12:41.319 "data_size": 63488 00:12:41.319 }, 00:12:41.319 { 00:12:41.319 "name": "BaseBdev2", 00:12:41.319 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:41.319 "is_configured": true, 00:12:41.319 "data_offset": 2048, 00:12:41.319 "data_size": 63488 00:12:41.319 } 00:12:41.319 ] 00:12:41.319 }' 00:12:41.319 03:21:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.319 03:21:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.579 03:21:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.579 "name": "raid_bdev1", 00:12:41.579 "uuid": "b67c57a0-376c-4b6b-adae-af1ba7b947b3", 00:12:41.579 "strip_size_kb": 0, 00:12:41.579 "state": "online", 00:12:41.579 "raid_level": "raid1", 00:12:41.579 "superblock": true, 00:12:41.579 "num_base_bdevs": 2, 00:12:41.579 "num_base_bdevs_discovered": 1, 00:12:41.579 "num_base_bdevs_operational": 1, 00:12:41.579 "base_bdevs_list": [ 00:12:41.579 { 00:12:41.579 "name": null, 00:12:41.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.579 "is_configured": false, 00:12:41.579 "data_offset": 0, 00:12:41.579 "data_size": 63488 00:12:41.579 }, 00:12:41.579 { 00:12:41.579 "name": "BaseBdev2", 00:12:41.579 "uuid": "beb150e1-0d72-5f78-bc91-b4b4fe22b907", 00:12:41.579 "is_configured": true, 00:12:41.579 "data_offset": 2048, 00:12:41.579 "data_size": 63488 00:12:41.579 } 00:12:41.579 ] 00:12:41.579 }' 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.579 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.844 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.844 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88485 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88485 ']' 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88485 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88485 00:12:41.845 killing process with pid 88485 00:12:41.845 Received shutdown signal, test time was about 60.000000 seconds 00:12:41.845 00:12:41.845 Latency(us) 00:12:41.845 [2024-11-21T03:21:29.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.845 [2024-11-21T03:21:29.411Z] =================================================================================================================== 00:12:41.845 [2024-11-21T03:21:29.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88485' 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88485 00:12:41.845 [2024-11-21 03:21:29.193281] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.845 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88485 00:12:41.845 [2024-11-21 03:21:29.193432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.845 [2024-11-21 03:21:29.193485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.845 [2024-11-21 03:21:29.193497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:41.845 [2024-11-21 03:21:29.225557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.107 03:21:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:42.107 00:12:42.107 real 0m21.681s 
00:12:42.107 user 0m26.849s 00:12:42.107 sys 0m3.711s 00:12:42.107 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.107 ************************************ 00:12:42.107 END TEST raid_rebuild_test_sb 00:12:42.107 ************************************ 00:12:42.107 03:21:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.107 03:21:29 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:42.107 03:21:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:42.107 03:21:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.107 03:21:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.107 ************************************ 00:12:42.107 START TEST raid_rebuild_test_io 00:12:42.107 ************************************ 00:12:42.107 03:21:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:42.107 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.108 
03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89198 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89198 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89198 ']' 00:12:42.108 03:21:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.108 03:21:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.108 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.108 Zero copy mechanism will not be used. 00:12:42.108 [2024-11-21 03:21:29.599424] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:12:42.108 [2024-11-21 03:21:29.599559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89198 ] 00:12:42.368 [2024-11-21 03:21:29.735585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:42.368 [2024-11-21 03:21:29.774855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.368 [2024-11-21 03:21:29.804936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.368 [2024-11-21 03:21:29.848003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.368 [2024-11-21 03:21:29.848068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.940 BaseBdev1_malloc 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.940 [2024-11-21 03:21:30.468012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:42.940 [2024-11-21 03:21:30.468105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.940 [2024-11-21 03:21:30.468140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.940 [2024-11-21 
03:21:30.468157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.940 [2024-11-21 03:21:30.470422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.940 [2024-11-21 03:21:30.470467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.940 BaseBdev1 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.940 BaseBdev2_malloc 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.940 [2024-11-21 03:21:30.488916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:42.940 [2024-11-21 03:21:30.488990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.940 [2024-11-21 03:21:30.489008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.940 [2024-11-21 03:21:30.489034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.940 [2024-11-21 03:21:30.491202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:42.940 [2024-11-21 03:21:30.491247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.940 BaseBdev2 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.940 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.201 spare_malloc 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.201 spare_delay 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.201 [2024-11-21 03:21:30.529843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.201 [2024-11-21 03:21:30.529923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.201 [2024-11-21 03:21:30.529946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:43.201 [2024-11-21 03:21:30.529958] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.201 [2024-11-21 03:21:30.532219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.201 [2024-11-21 03:21:30.532262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.201 spare 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.201 [2024-11-21 03:21:30.541902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.201 [2024-11-21 03:21:30.543948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.201 [2024-11-21 03:21:30.544069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:43.201 [2024-11-21 03:21:30.544082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.201 [2024-11-21 03:21:30.544367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:43.201 [2024-11-21 03:21:30.544508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:43.201 [2024-11-21 03:21:30.544527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:43.201 [2024-11-21 03:21:30.544661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.201 "name": "raid_bdev1", 00:12:43.201 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:43.201 "strip_size_kb": 0, 00:12:43.201 "state": "online", 00:12:43.201 "raid_level": "raid1", 00:12:43.201 "superblock": false, 00:12:43.201 "num_base_bdevs": 2, 00:12:43.201 
"num_base_bdevs_discovered": 2, 00:12:43.201 "num_base_bdevs_operational": 2, 00:12:43.201 "base_bdevs_list": [ 00:12:43.201 { 00:12:43.201 "name": "BaseBdev1", 00:12:43.201 "uuid": "c52102a6-36f7-501f-9ff8-18cf91bbca04", 00:12:43.201 "is_configured": true, 00:12:43.201 "data_offset": 0, 00:12:43.201 "data_size": 65536 00:12:43.201 }, 00:12:43.201 { 00:12:43.201 "name": "BaseBdev2", 00:12:43.201 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:43.201 "is_configured": true, 00:12:43.201 "data_offset": 0, 00:12:43.201 "data_size": 65536 00:12:43.201 } 00:12:43.201 ] 00:12:43.201 }' 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.201 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.462 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.462 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.462 03:21:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.462 03:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:43.462 [2024-11-21 03:21:30.994340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.462 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.722 [2024-11-21 03:21:31.070045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.722 "name": "raid_bdev1", 00:12:43.722 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:43.722 "strip_size_kb": 0, 00:12:43.722 "state": "online", 00:12:43.722 "raid_level": "raid1", 00:12:43.722 "superblock": false, 00:12:43.722 "num_base_bdevs": 2, 00:12:43.722 "num_base_bdevs_discovered": 1, 00:12:43.722 "num_base_bdevs_operational": 1, 00:12:43.722 "base_bdevs_list": [ 00:12:43.722 { 00:12:43.722 "name": null, 00:12:43.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.722 "is_configured": false, 00:12:43.722 "data_offset": 0, 00:12:43.722 "data_size": 65536 00:12:43.722 }, 00:12:43.722 { 00:12:43.722 "name": "BaseBdev2", 00:12:43.722 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:43.722 "is_configured": true, 00:12:43.722 "data_offset": 0, 00:12:43.722 "data_size": 65536 00:12:43.722 } 00:12:43.722 ] 00:12:43.722 }' 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.722 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.722 [2024-11-21 03:21:31.164175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:43.722 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:43.722 Zero copy mechanism will not be used. 00:12:43.722 Running I/O for 60 seconds... 00:12:43.983 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.983 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.983 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.983 [2024-11-21 03:21:31.513166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.243 03:21:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.243 03:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:44.243 [2024-11-21 03:21:31.596032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:44.243 [2024-11-21 03:21:31.598148] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.243 [2024-11-21 03:21:31.718367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.243 [2024-11-21 03:21:31.719051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.535 [2024-11-21 03:21:31.858066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.535 [2024-11-21 03:21:31.858492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.796 164.00 IOPS, 492.00 MiB/s [2024-11-21T03:21:32.362Z] [2024-11-21 03:21:32.215952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:45.055 [2024-11-21 03:21:32.445321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.055 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.055 "name": "raid_bdev1", 00:12:45.055 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:45.055 "strip_size_kb": 0, 00:12:45.055 "state": "online", 00:12:45.055 "raid_level": "raid1", 00:12:45.055 "superblock": false, 00:12:45.055 "num_base_bdevs": 2, 00:12:45.055 "num_base_bdevs_discovered": 2, 00:12:45.055 "num_base_bdevs_operational": 2, 00:12:45.055 "process": { 00:12:45.055 "type": "rebuild", 00:12:45.055 "target": "spare", 00:12:45.055 "progress": { 00:12:45.055 "blocks": 10240, 00:12:45.055 "percent": 15 00:12:45.055 } 00:12:45.055 }, 00:12:45.055 "base_bdevs_list": [ 00:12:45.055 { 00:12:45.055 "name": "spare", 00:12:45.055 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:45.055 
"is_configured": true, 00:12:45.055 "data_offset": 0, 00:12:45.055 "data_size": 65536 00:12:45.055 }, 00:12:45.055 { 00:12:45.056 "name": "BaseBdev2", 00:12:45.056 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:45.056 "is_configured": true, 00:12:45.056 "data_offset": 0, 00:12:45.056 "data_size": 65536 00:12:45.056 } 00:12:45.056 ] 00:12:45.056 }' 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.315 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.315 [2024-11-21 03:21:32.701838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.315 [2024-11-21 03:21:32.769043] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:45.315 [2024-11-21 03:21:32.769601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:45.576 [2024-11-21 03:21:32.882538] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:45.576 [2024-11-21 03:21:32.896507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.576 [2024-11-21 03:21:32.896663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.576 [2024-11-21 03:21:32.896689] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.576 [2024-11-21 03:21:32.914915] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.576 "name": "raid_bdev1", 00:12:45.576 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:45.576 "strip_size_kb": 0, 00:12:45.576 "state": "online", 00:12:45.576 "raid_level": "raid1", 00:12:45.576 "superblock": false, 00:12:45.576 "num_base_bdevs": 2, 00:12:45.576 "num_base_bdevs_discovered": 1, 00:12:45.576 "num_base_bdevs_operational": 1, 00:12:45.576 "base_bdevs_list": [ 00:12:45.576 { 00:12:45.576 "name": null, 00:12:45.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.576 "is_configured": false, 00:12:45.576 "data_offset": 0, 00:12:45.576 "data_size": 65536 00:12:45.576 }, 00:12:45.576 { 00:12:45.576 "name": "BaseBdev2", 00:12:45.576 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:45.576 "is_configured": true, 00:12:45.576 "data_offset": 0, 00:12:45.576 "data_size": 65536 00:12:45.576 } 00:12:45.576 ] 00:12:45.576 }' 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.576 03:21:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.836 140.50 IOPS, 421.50 MiB/s [2024-11-21T03:21:33.402Z] 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.836 03:21:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.096 "name": "raid_bdev1", 00:12:46.096 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:46.096 "strip_size_kb": 0, 00:12:46.096 "state": "online", 00:12:46.096 "raid_level": "raid1", 00:12:46.096 "superblock": false, 00:12:46.096 "num_base_bdevs": 2, 00:12:46.096 "num_base_bdevs_discovered": 1, 00:12:46.096 "num_base_bdevs_operational": 1, 00:12:46.096 "base_bdevs_list": [ 00:12:46.096 { 00:12:46.096 "name": null, 00:12:46.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.096 "is_configured": false, 00:12:46.096 "data_offset": 0, 00:12:46.096 "data_size": 65536 00:12:46.096 }, 00:12:46.096 { 00:12:46.096 "name": "BaseBdev2", 00:12:46.096 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:46.096 "is_configured": true, 00:12:46.096 "data_offset": 0, 00:12:46.096 "data_size": 65536 00:12:46.096 } 00:12:46.096 ] 00:12:46.096 }' 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.096 03:21:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.096 [2024-11-21 03:21:33.546430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.096 03:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.096 [2024-11-21 03:21:33.593082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:46.096 [2024-11-21 03:21:33.595191] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.356 [2024-11-21 03:21:33.716139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.356 [2024-11-21 03:21:33.716802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:46.616 [2024-11-21 03:21:33.931604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.616 [2024-11-21 03:21:33.932038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.877 157.67 IOPS, 473.00 MiB/s [2024-11-21T03:21:34.443Z] [2024-11-21 03:21:34.369089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:46.877 [2024-11-21 03:21:34.369500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.137 "name": "raid_bdev1", 00:12:47.137 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:47.137 "strip_size_kb": 0, 00:12:47.137 "state": "online", 00:12:47.137 "raid_level": "raid1", 00:12:47.137 "superblock": false, 00:12:47.137 "num_base_bdevs": 2, 00:12:47.137 "num_base_bdevs_discovered": 2, 00:12:47.137 "num_base_bdevs_operational": 2, 00:12:47.137 "process": { 00:12:47.137 "type": "rebuild", 00:12:47.137 "target": "spare", 00:12:47.137 "progress": { 00:12:47.137 "blocks": 12288, 00:12:47.137 "percent": 18 00:12:47.137 } 00:12:47.137 }, 00:12:47.137 "base_bdevs_list": [ 00:12:47.137 { 00:12:47.137 "name": "spare", 00:12:47.137 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:47.137 "is_configured": true, 00:12:47.137 "data_offset": 0, 00:12:47.137 "data_size": 65536 00:12:47.137 }, 00:12:47.137 { 00:12:47.137 "name": "BaseBdev2", 00:12:47.137 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:47.137 "is_configured": true, 00:12:47.137 "data_offset": 0, 00:12:47.137 "data_size": 65536 00:12:47.137 } 00:12:47.137 ] 00:12:47.137 }' 00:12:47.137 03:21:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.137 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.137 [2024-11-21 03:21:34.693632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:47.137 [2024-11-21 03:21:34.694298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=329 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.398 03:21:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.398 "name": "raid_bdev1", 00:12:47.398 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:47.398 "strip_size_kb": 0, 00:12:47.398 "state": "online", 00:12:47.398 "raid_level": "raid1", 00:12:47.398 "superblock": false, 00:12:47.398 "num_base_bdevs": 2, 00:12:47.398 "num_base_bdevs_discovered": 2, 00:12:47.398 "num_base_bdevs_operational": 2, 00:12:47.398 "process": { 00:12:47.398 "type": "rebuild", 00:12:47.398 "target": "spare", 00:12:47.398 "progress": { 00:12:47.398 "blocks": 14336, 00:12:47.398 "percent": 21 00:12:47.398 } 00:12:47.398 }, 00:12:47.398 "base_bdevs_list": [ 00:12:47.398 { 00:12:47.398 "name": "spare", 00:12:47.398 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:47.398 "is_configured": true, 00:12:47.398 "data_offset": 0, 00:12:47.398 "data_size": 65536 00:12:47.398 }, 00:12:47.398 { 00:12:47.398 "name": "BaseBdev2", 00:12:47.398 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:47.398 "is_configured": true, 00:12:47.398 "data_offset": 0, 00:12:47.398 "data_size": 65536 00:12:47.398 } 00:12:47.398 ] 00:12:47.398 }' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.398 03:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.228 141.25 IOPS, 423.75 MiB/s [2024-11-21T03:21:35.794Z] [2024-11-21 03:21:35.679892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:48.228 [2024-11-21 03:21:35.788026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.487 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.487 "name": "raid_bdev1", 
00:12:48.488 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:48.488 "strip_size_kb": 0, 00:12:48.488 "state": "online", 00:12:48.488 "raid_level": "raid1", 00:12:48.488 "superblock": false, 00:12:48.488 "num_base_bdevs": 2, 00:12:48.488 "num_base_bdevs_discovered": 2, 00:12:48.488 "num_base_bdevs_operational": 2, 00:12:48.488 "process": { 00:12:48.488 "type": "rebuild", 00:12:48.488 "target": "spare", 00:12:48.488 "progress": { 00:12:48.488 "blocks": 34816, 00:12:48.488 "percent": 53 00:12:48.488 } 00:12:48.488 }, 00:12:48.488 "base_bdevs_list": [ 00:12:48.488 { 00:12:48.488 "name": "spare", 00:12:48.488 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:48.488 "is_configured": true, 00:12:48.488 "data_offset": 0, 00:12:48.488 "data_size": 65536 00:12:48.488 }, 00:12:48.488 { 00:12:48.488 "name": "BaseBdev2", 00:12:48.488 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:48.488 "is_configured": true, 00:12:48.488 "data_offset": 0, 00:12:48.488 "data_size": 65536 00:12:48.488 } 00:12:48.488 ] 00:12:48.488 }' 00:12:48.488 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.488 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.488 03:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.488 03:21:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.488 03:21:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.748 [2024-11-21 03:21:36.105493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:49.008 123.80 IOPS, 371.40 MiB/s [2024-11-21T03:21:36.574Z] [2024-11-21 03:21:36.532763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:49.268 [2024-11-21 
03:21:36.653554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:49.268 [2024-11-21 03:21:36.653861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.529 "name": "raid_bdev1", 00:12:49.529 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:49.529 "strip_size_kb": 0, 00:12:49.529 "state": "online", 00:12:49.529 "raid_level": "raid1", 00:12:49.529 "superblock": false, 00:12:49.529 "num_base_bdevs": 2, 00:12:49.529 "num_base_bdevs_discovered": 2, 00:12:49.529 "num_base_bdevs_operational": 2, 00:12:49.529 
"process": { 00:12:49.529 "type": "rebuild", 00:12:49.529 "target": "spare", 00:12:49.529 "progress": { 00:12:49.529 "blocks": 51200, 00:12:49.529 "percent": 78 00:12:49.529 } 00:12:49.529 }, 00:12:49.529 "base_bdevs_list": [ 00:12:49.529 { 00:12:49.529 "name": "spare", 00:12:49.529 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:49.529 "is_configured": true, 00:12:49.529 "data_offset": 0, 00:12:49.529 "data_size": 65536 00:12:49.529 }, 00:12:49.529 { 00:12:49.529 "name": "BaseBdev2", 00:12:49.529 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:49.529 "is_configured": true, 00:12:49.529 "data_offset": 0, 00:12:49.529 "data_size": 65536 00:12:49.529 } 00:12:49.529 ] 00:12:49.529 }' 00:12:49.529 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.790 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.790 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.790 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.790 03:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.790 109.00 IOPS, 327.00 MiB/s [2024-11-21T03:21:37.356Z] [2024-11-21 03:21:37.308737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:50.360 [2024-11-21 03:21:37.746101] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.360 [2024-11-21 03:21:37.846007] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.360 [2024-11-21 03:21:37.847813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.620 03:21:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.620 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.947 99.29 IOPS, 297.86 MiB/s [2024-11-21T03:21:38.513Z] 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.947 "name": "raid_bdev1", 00:12:50.947 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:50.947 "strip_size_kb": 0, 00:12:50.947 "state": "online", 00:12:50.947 "raid_level": "raid1", 00:12:50.947 "superblock": false, 00:12:50.947 "num_base_bdevs": 2, 00:12:50.947 "num_base_bdevs_discovered": 2, 00:12:50.947 "num_base_bdevs_operational": 2, 00:12:50.947 "base_bdevs_list": [ 00:12:50.947 { 00:12:50.947 "name": "spare", 00:12:50.947 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:50.947 "is_configured": true, 00:12:50.947 "data_offset": 0, 00:12:50.947 "data_size": 65536 00:12:50.947 }, 00:12:50.947 { 00:12:50.947 "name": "BaseBdev2", 00:12:50.947 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:50.947 "is_configured": 
true, 00:12:50.947 "data_offset": 0, 00:12:50.947 "data_size": 65536 00:12:50.947 } 00:12:50.947 ] 00:12:50.947 }' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.947 "name": "raid_bdev1", 00:12:50.947 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:50.947 "strip_size_kb": 0, 
00:12:50.947 "state": "online", 00:12:50.947 "raid_level": "raid1", 00:12:50.947 "superblock": false, 00:12:50.947 "num_base_bdevs": 2, 00:12:50.947 "num_base_bdevs_discovered": 2, 00:12:50.947 "num_base_bdevs_operational": 2, 00:12:50.947 "base_bdevs_list": [ 00:12:50.947 { 00:12:50.947 "name": "spare", 00:12:50.947 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:50.947 "is_configured": true, 00:12:50.947 "data_offset": 0, 00:12:50.947 "data_size": 65536 00:12:50.947 }, 00:12:50.947 { 00:12:50.947 "name": "BaseBdev2", 00:12:50.947 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:50.947 "is_configured": true, 00:12:50.947 "data_offset": 0, 00:12:50.947 "data_size": 65536 00:12:50.947 } 00:12:50.947 ] 00:12:50.947 }' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.947 03:21:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.947 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.206 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.206 "name": "raid_bdev1", 00:12:51.207 "uuid": "79c96bb4-bb5c-48ca-82de-37fc08f4c31f", 00:12:51.207 "strip_size_kb": 0, 00:12:51.207 "state": "online", 00:12:51.207 "raid_level": "raid1", 00:12:51.207 "superblock": false, 00:12:51.207 "num_base_bdevs": 2, 00:12:51.207 "num_base_bdevs_discovered": 2, 00:12:51.207 "num_base_bdevs_operational": 2, 00:12:51.207 "base_bdevs_list": [ 00:12:51.207 { 00:12:51.207 "name": "spare", 00:12:51.207 "uuid": "ef2091d7-8a00-5642-b053-1036dffd8e09", 00:12:51.207 "is_configured": true, 00:12:51.207 "data_offset": 0, 00:12:51.207 "data_size": 65536 00:12:51.207 }, 00:12:51.207 { 00:12:51.207 "name": "BaseBdev2", 00:12:51.207 "uuid": "1eef3518-869a-57dc-a728-8e19b555543c", 00:12:51.207 "is_configured": true, 00:12:51.207 "data_offset": 0, 00:12:51.207 "data_size": 65536 00:12:51.207 } 00:12:51.207 ] 00:12:51.207 }' 00:12:51.207 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.207 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.466 [2024-11-21 03:21:38.882955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.466 [2024-11-21 03:21:38.883097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.466 00:12:51.466 Latency(us) 00:12:51.466 [2024-11-21T03:21:39.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.466 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:51.466 raid_bdev1 : 7.77 92.00 276.00 0.00 0.00 15113.02 315.96 112872.95 00:12:51.466 [2024-11-21T03:21:39.032Z] =================================================================================================================== 00:12:51.466 [2024-11-21T03:21:39.032Z] Total : 92.00 276.00 0.00 0.00 15113.02 315.96 112872.95 00:12:51.466 [2024-11-21 03:21:38.942889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.466 [2024-11-21 03:21:38.943002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.466 [2024-11-21 03:21:38.943114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.466 [2024-11-21 03:21:38.943170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:51.466 { 00:12:51.466 "results": [ 00:12:51.466 { 00:12:51.466 "job": "raid_bdev1", 00:12:51.466 "core_mask": "0x1", 00:12:51.466 "workload": "randrw", 00:12:51.466 "percentage": 50, 00:12:51.466 "status": "finished", 00:12:51.466 "queue_depth": 2, 00:12:51.466 "io_size": 3145728, 00:12:51.466 
"runtime": 7.771871, 00:12:51.466 "iops": 91.99843898592758, 00:12:51.466 "mibps": 275.99531695778273, 00:12:51.466 "io_failed": 0, 00:12:51.466 "io_timeout": 0, 00:12:51.466 "avg_latency_us": 15113.015648171233, 00:12:51.466 "min_latency_us": 315.95572213021876, 00:12:51.466 "max_latency_us": 112872.9504052994 00:12:51.466 } 00:12:51.466 ], 00:12:51.466 "core_count": 1 00:12:51.466 } 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.466 03:21:38 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.466 03:21:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:51.726 /dev/nbd0 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.726 1+0 records in 00:12:51.726 1+0 records out 00:12:51.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544799 s, 7.5 MB/s 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.726 03:21:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.726 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:51.985 /dev/nbd1 
00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.985 1+0 records in 00:12:51.985 1+0 records out 00:12:51.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515335 s, 7.9 MB/s 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.985 03:21:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.985 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.244 03:21:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89198 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 89198 ']' 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 89198 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.503 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89198 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.762 killing process with pid 89198 00:12:52.762 Received shutdown signal, test time was about 8.911615 seconds 00:12:52.762 00:12:52.762 Latency(us) 00:12:52.762 [2024-11-21T03:21:40.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.762 [2024-11-21T03:21:40.328Z] =================================================================================================================== 00:12:52.762 [2024-11-21T03:21:40.328Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89198' 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89198 00:12:52.762 [2024-11-21 03:21:40.078691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89198 00:12:52.762 [2024-11-21 03:21:40.104727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:52.762 00:12:52.762 real 0m10.808s 00:12:52.762 user 0m13.966s 00:12:52.762 sys 0m1.447s 00:12:52.762 ************************************ 00:12:52.762 END TEST raid_rebuild_test_io 00:12:52.762 ************************************ 00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:52.762 03:21:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.022 03:21:40 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:53.022 03:21:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:53.022 03:21:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.022 03:21:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.022 ************************************ 00:12:53.022 START TEST raid_rebuild_test_sb_io 00:12:53.022 ************************************ 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89563 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89563 00:12:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89563 ']' 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.022 03:21:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:53.022 Zero copy mechanism will not be used. 00:12:53.022 [2024-11-21 03:21:40.495111] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:12:53.022 [2024-11-21 03:21:40.495272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89563 ] 00:12:53.283 [2024-11-21 03:21:40.637549] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:53.283 [2024-11-21 03:21:40.677717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.283 [2024-11-21 03:21:40.706977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.283 [2024-11-21 03:21:40.749132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.283 [2024-11-21 03:21:40.749281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 BaseBdev1_malloc 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 [2024-11-21 03:21:41.372706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:53.852 [2024-11-21 03:21:41.372871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.852 [2024-11-21 03:21:41.372918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:53.852 [2024-11-21 03:21:41.372954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.852 [2024-11-21 03:21:41.375170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.852 [2024-11-21 03:21:41.375247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.852 BaseBdev1 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 BaseBdev2_malloc 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 [2024-11-21 03:21:41.401391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:53.852 [2024-11-21 03:21:41.401458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.852 [2024-11-21 03:21:41.401476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.852 [2024-11-21 03:21:41.401487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.852 [2024-11-21 03:21:41.403566] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.852 [2024-11-21 03:21:41.403669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.852 BaseBdev2 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.112 spare_malloc 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.112 spare_delay 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.112 [2024-11-21 03:21:41.442252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.112 [2024-11-21 03:21:41.442394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.112 [2024-11-21 03:21:41.442419] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:54.112 [2024-11-21 03:21:41.442432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.112 [2024-11-21 03:21:41.444623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.112 [2024-11-21 03:21:41.444662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.112 spare 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.112 [2024-11-21 03:21:41.454318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.112 [2024-11-21 03:21:41.456275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.112 [2024-11-21 03:21:41.456489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:54.112 [2024-11-21 03:21:41.456527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.112 [2024-11-21 03:21:41.456832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:54.112 [2024-11-21 03:21:41.457002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:54.112 [2024-11-21 03:21:41.457067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:54.112 [2024-11-21 03:21:41.457236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.112 03:21:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.112 "name": "raid_bdev1", 00:12:54.112 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:54.112 
"strip_size_kb": 0, 00:12:54.112 "state": "online", 00:12:54.112 "raid_level": "raid1", 00:12:54.112 "superblock": true, 00:12:54.112 "num_base_bdevs": 2, 00:12:54.112 "num_base_bdevs_discovered": 2, 00:12:54.112 "num_base_bdevs_operational": 2, 00:12:54.112 "base_bdevs_list": [ 00:12:54.112 { 00:12:54.112 "name": "BaseBdev1", 00:12:54.112 "uuid": "737eebda-4b48-5530-93af-fd070fab404e", 00:12:54.112 "is_configured": true, 00:12:54.112 "data_offset": 2048, 00:12:54.112 "data_size": 63488 00:12:54.112 }, 00:12:54.112 { 00:12:54.112 "name": "BaseBdev2", 00:12:54.112 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:54.112 "is_configured": true, 00:12:54.112 "data_offset": 2048, 00:12:54.112 "data_size": 63488 00:12:54.112 } 00:12:54.112 ] 00:12:54.112 }' 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.112 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.372 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.372 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.372 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.372 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:54.372 [2024-11-21 03:21:41.930751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.630 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.630 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:54.630 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.630 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.630 03:21:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.630 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:54.630 03:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.630 [2024-11-21 03:21:42.034438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.630 03:21:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.630 "name": "raid_bdev1", 00:12:54.630 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:54.630 "strip_size_kb": 0, 00:12:54.630 "state": "online", 00:12:54.630 "raid_level": "raid1", 00:12:54.630 "superblock": true, 00:12:54.630 "num_base_bdevs": 2, 00:12:54.630 "num_base_bdevs_discovered": 1, 00:12:54.630 "num_base_bdevs_operational": 1, 00:12:54.630 "base_bdevs_list": [ 00:12:54.630 { 00:12:54.630 "name": null, 00:12:54.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.630 "is_configured": false, 00:12:54.630 "data_offset": 0, 00:12:54.630 "data_size": 63488 00:12:54.630 }, 00:12:54.630 { 00:12:54.630 "name": "BaseBdev2", 00:12:54.630 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:54.630 "is_configured": true, 00:12:54.630 "data_offset": 2048, 00:12:54.630 "data_size": 63488 00:12:54.630 } 00:12:54.630 ] 00:12:54.630 }' 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.630 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.630 [2024-11-21 03:21:42.124497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:54.630 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:54.630 Zero copy mechanism will not be used. 00:12:54.630 Running I/O for 60 seconds... 00:12:55.198 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.198 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.198 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.198 [2024-11-21 03:21:42.515664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.198 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.198 03:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:55.198 [2024-11-21 03:21:42.569013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:55.198 [2024-11-21 03:21:42.571116] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.198 [2024-11-21 03:21:42.690138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.198 [2024-11-21 03:21:42.690565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.457 [2024-11-21 03:21:42.903978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.457 [2024-11-21 03:21:42.904283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:12:55.716 183.00 IOPS, 549.00 MiB/s [2024-11-21T03:21:43.282Z] [2024-11-21 03:21:43.248064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.975 [2024-11-21 03:21:43.369852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.235 "name": "raid_bdev1", 00:12:56.235 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:56.235 "strip_size_kb": 0, 00:12:56.235 "state": "online", 00:12:56.235 "raid_level": "raid1", 00:12:56.235 "superblock": true, 00:12:56.235 "num_base_bdevs": 2, 00:12:56.235 "num_base_bdevs_discovered": 2, 00:12:56.235 "num_base_bdevs_operational": 2, 00:12:56.235 
"process": { 00:12:56.235 "type": "rebuild", 00:12:56.235 "target": "spare", 00:12:56.235 "progress": { 00:12:56.235 "blocks": 10240, 00:12:56.235 "percent": 16 00:12:56.235 } 00:12:56.235 }, 00:12:56.235 "base_bdevs_list": [ 00:12:56.235 { 00:12:56.235 "name": "spare", 00:12:56.235 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:12:56.235 "is_configured": true, 00:12:56.235 "data_offset": 2048, 00:12:56.235 "data_size": 63488 00:12:56.235 }, 00:12:56.235 { 00:12:56.235 "name": "BaseBdev2", 00:12:56.235 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:56.235 "is_configured": true, 00:12:56.235 "data_offset": 2048, 00:12:56.235 "data_size": 63488 00:12:56.235 } 00:12:56.235 ] 00:12:56.235 }' 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.235 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.235 [2024-11-21 03:21:43.706214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:56.235 [2024-11-21 03:21:43.720045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.495 [2024-11-21 03:21:43.820269] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.495 [2024-11-21 03:21:43.828160] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.495 [2024-11-21 03:21:43.828283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.495 [2024-11-21 03:21:43.828313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.495 [2024-11-21 03:21:43.845909] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.495 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.496 "name": "raid_bdev1", 00:12:56.496 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:56.496 "strip_size_kb": 0, 00:12:56.496 "state": "online", 00:12:56.496 "raid_level": "raid1", 00:12:56.496 "superblock": true, 00:12:56.496 "num_base_bdevs": 2, 00:12:56.496 "num_base_bdevs_discovered": 1, 00:12:56.496 "num_base_bdevs_operational": 1, 00:12:56.496 "base_bdevs_list": [ 00:12:56.496 { 00:12:56.496 "name": null, 00:12:56.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.496 "is_configured": false, 00:12:56.496 "data_offset": 0, 00:12:56.496 "data_size": 63488 00:12:56.496 }, 00:12:56.496 { 00:12:56.496 "name": "BaseBdev2", 00:12:56.496 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:56.496 "is_configured": true, 00:12:56.496 "data_offset": 2048, 00:12:56.496 "data_size": 63488 00:12:56.496 } 00:12:56.496 ] 00:12:56.496 }' 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.496 03:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.755 178.00 IOPS, 534.00 MiB/s [2024-11-21T03:21:44.321Z] 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.755 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.755 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.755 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.755 03:21:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.756 "name": "raid_bdev1", 00:12:56.756 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:56.756 "strip_size_kb": 0, 00:12:56.756 "state": "online", 00:12:56.756 "raid_level": "raid1", 00:12:56.756 "superblock": true, 00:12:56.756 "num_base_bdevs": 2, 00:12:56.756 "num_base_bdevs_discovered": 1, 00:12:56.756 "num_base_bdevs_operational": 1, 00:12:56.756 "base_bdevs_list": [ 00:12:56.756 { 00:12:56.756 "name": null, 00:12:56.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.756 "is_configured": false, 00:12:56.756 "data_offset": 0, 00:12:56.756 "data_size": 63488 00:12:56.756 }, 00:12:56.756 { 00:12:56.756 "name": "BaseBdev2", 00:12:56.756 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:56.756 "is_configured": true, 00:12:56.756 "data_offset": 2048, 00:12:56.756 "data_size": 63488 00:12:56.756 } 00:12:56.756 ] 00:12:56.756 }' 00:12:56.756 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.015 03:21:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.015 [2024-11-21 03:21:44.420780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.015 03:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:57.015 [2024-11-21 03:21:44.470383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:57.015 [2024-11-21 03:21:44.472418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.275 [2024-11-21 03:21:44.586686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.275 [2024-11-21 03:21:44.587230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.275 [2024-11-21 03:21:44.690480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.275 [2024-11-21 03:21:44.690746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.844 182.67 IOPS, 548.00 MiB/s [2024-11-21T03:21:45.410Z] [2024-11-21 03:21:45.175944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:58.104 [2024-11-21 03:21:45.416826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.104 "name": "raid_bdev1", 00:12:58.104 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:58.104 "strip_size_kb": 0, 00:12:58.104 "state": "online", 00:12:58.104 "raid_level": "raid1", 00:12:58.104 "superblock": true, 00:12:58.104 "num_base_bdevs": 2, 00:12:58.104 "num_base_bdevs_discovered": 2, 00:12:58.104 "num_base_bdevs_operational": 2, 00:12:58.104 "process": { 00:12:58.104 "type": "rebuild", 00:12:58.104 "target": "spare", 00:12:58.104 "progress": { 00:12:58.104 "blocks": 14336, 00:12:58.104 "percent": 22 00:12:58.104 } 00:12:58.104 }, 00:12:58.104 "base_bdevs_list": [ 00:12:58.104 { 00:12:58.104 "name": "spare", 00:12:58.104 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:12:58.104 
"is_configured": true, 00:12:58.104 "data_offset": 2048, 00:12:58.104 "data_size": 63488 00:12:58.104 }, 00:12:58.104 { 00:12:58.104 "name": "BaseBdev2", 00:12:58.104 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:58.104 "is_configured": true, 00:12:58.104 "data_offset": 2048, 00:12:58.104 "data_size": 63488 00:12:58.104 } 00:12:58.104 ] 00:12:58.104 }' 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:58.104 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:58.104 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=340 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.105 "name": "raid_bdev1", 00:12:58.105 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:58.105 "strip_size_kb": 0, 00:12:58.105 "state": "online", 00:12:58.105 "raid_level": "raid1", 00:12:58.105 "superblock": true, 00:12:58.105 "num_base_bdevs": 2, 00:12:58.105 "num_base_bdevs_discovered": 2, 00:12:58.105 "num_base_bdevs_operational": 2, 00:12:58.105 "process": { 00:12:58.105 "type": "rebuild", 00:12:58.105 "target": "spare", 00:12:58.105 "progress": { 00:12:58.105 "blocks": 14336, 00:12:58.105 "percent": 22 00:12:58.105 } 00:12:58.105 }, 00:12:58.105 "base_bdevs_list": [ 00:12:58.105 { 00:12:58.105 "name": "spare", 00:12:58.105 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:12:58.105 "is_configured": true, 00:12:58.105 "data_offset": 2048, 00:12:58.105 "data_size": 63488 00:12:58.105 }, 00:12:58.105 { 00:12:58.105 "name": "BaseBdev2", 00:12:58.105 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:58.105 "is_configured": true, 00:12:58.105 "data_offset": 2048, 00:12:58.105 "data_size": 63488 00:12:58.105 } 00:12:58.105 ] 
00:12:58.105 }' 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.105 [2024-11-21 03:21:45.626886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.105 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.364 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.364 03:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.623 [2024-11-21 03:21:45.961894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:59.192 155.75 IOPS, 467.25 MiB/s [2024-11-21T03:21:46.759Z] [2024-11-21 03:21:46.607750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.193 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.452 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.452 "name": "raid_bdev1", 00:12:59.452 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:12:59.452 "strip_size_kb": 0, 00:12:59.452 "state": "online", 00:12:59.452 "raid_level": "raid1", 00:12:59.452 "superblock": true, 00:12:59.452 "num_base_bdevs": 2, 00:12:59.452 "num_base_bdevs_discovered": 2, 00:12:59.452 "num_base_bdevs_operational": 2, 00:12:59.452 "process": { 00:12:59.452 "type": "rebuild", 00:12:59.452 "target": "spare", 00:12:59.452 "progress": { 00:12:59.452 "blocks": 32768, 00:12:59.452 "percent": 51 00:12:59.452 } 00:12:59.452 }, 00:12:59.452 "base_bdevs_list": [ 00:12:59.452 { 00:12:59.452 "name": "spare", 00:12:59.452 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:12:59.452 "is_configured": true, 00:12:59.452 "data_offset": 2048, 00:12:59.452 "data_size": 63488 00:12:59.452 }, 00:12:59.452 { 00:12:59.452 "name": "BaseBdev2", 00:12:59.452 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:12:59.452 "is_configured": true, 00:12:59.452 "data_offset": 2048, 00:12:59.452 "data_size": 63488 00:12:59.452 } 00:12:59.452 ] 00:12:59.452 }' 00:12:59.452 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.452 [2024-11-21 03:21:46.814830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:59.452 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.452 03:21:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.452 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.452 03:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.712 133.00 IOPS, 399.00 MiB/s [2024-11-21T03:21:47.278Z] [2024-11-21 03:21:47.138513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:59.712 [2024-11-21 03:21:47.138908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:59.972 [2024-11-21 03:21:47.359403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.540 [2024-11-21 03:21:47.926247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.540 "name": "raid_bdev1", 00:13:00.540 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:00.540 "strip_size_kb": 0, 00:13:00.540 "state": "online", 00:13:00.540 "raid_level": "raid1", 00:13:00.540 "superblock": true, 00:13:00.540 "num_base_bdevs": 2, 00:13:00.540 "num_base_bdevs_discovered": 2, 00:13:00.540 "num_base_bdevs_operational": 2, 00:13:00.540 "process": { 00:13:00.540 "type": "rebuild", 00:13:00.540 "target": "spare", 00:13:00.540 "progress": { 00:13:00.540 "blocks": 49152, 00:13:00.540 "percent": 77 00:13:00.540 } 00:13:00.540 }, 00:13:00.540 "base_bdevs_list": [ 00:13:00.540 { 00:13:00.540 "name": "spare", 00:13:00.540 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:00.540 "is_configured": true, 00:13:00.540 "data_offset": 2048, 00:13:00.540 "data_size": 63488 00:13:00.540 }, 00:13:00.540 { 00:13:00.540 "name": "BaseBdev2", 00:13:00.540 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:00.540 "is_configured": true, 00:13:00.540 "data_offset": 2048, 00:13:00.540 "data_size": 63488 00:13:00.540 } 00:13:00.540 ] 00:13:00.540 }' 00:13:00.540 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.541 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.541 03:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.541 03:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.541 03:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 
-- # sleep 1 00:13:00.801 [2024-11-21 03:21:48.135267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:01.060 117.33 IOPS, 352.00 MiB/s [2024-11-21T03:21:48.626Z] [2024-11-21 03:21:48.456796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:01.060 [2024-11-21 03:21:48.559616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:01.320 [2024-11-21 03:21:48.762760] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:01.320 [2024-11-21 03:21:48.789332] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:01.320 [2024-11-21 03:21:48.791207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.580 
03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.580 "name": "raid_bdev1", 00:13:01.580 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:01.580 "strip_size_kb": 0, 00:13:01.580 "state": "online", 00:13:01.580 "raid_level": "raid1", 00:13:01.580 "superblock": true, 00:13:01.580 "num_base_bdevs": 2, 00:13:01.580 "num_base_bdevs_discovered": 2, 00:13:01.580 "num_base_bdevs_operational": 2, 00:13:01.580 "base_bdevs_list": [ 00:13:01.580 { 00:13:01.580 "name": "spare", 00:13:01.580 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:01.580 "is_configured": true, 00:13:01.580 "data_offset": 2048, 00:13:01.580 "data_size": 63488 00:13:01.580 }, 00:13:01.580 { 00:13:01.580 "name": "BaseBdev2", 00:13:01.580 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:01.580 "is_configured": true, 00:13:01.580 "data_offset": 2048, 00:13:01.580 "data_size": 63488 00:13:01.580 } 00:13:01.580 ] 00:13:01.580 }' 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:01.580 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.841 106.43 IOPS, 319.29 MiB/s [2024-11-21T03:21:49.407Z] 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.841 "name": "raid_bdev1", 00:13:01.841 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:01.841 "strip_size_kb": 0, 00:13:01.841 "state": "online", 00:13:01.841 "raid_level": "raid1", 00:13:01.841 "superblock": true, 00:13:01.841 "num_base_bdevs": 2, 00:13:01.841 "num_base_bdevs_discovered": 2, 00:13:01.841 "num_base_bdevs_operational": 2, 00:13:01.841 "base_bdevs_list": [ 00:13:01.841 { 00:13:01.841 "name": "spare", 00:13:01.841 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:01.841 "is_configured": true, 00:13:01.841 "data_offset": 2048, 00:13:01.841 "data_size": 63488 00:13:01.841 }, 00:13:01.841 { 00:13:01.841 "name": "BaseBdev2", 00:13:01.841 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:01.841 "is_configured": true, 00:13:01.841 "data_offset": 2048, 00:13:01.841 "data_size": 63488 00:13:01.841 } 00:13:01.841 ] 00:13:01.841 }' 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.841 "name": "raid_bdev1", 00:13:01.841 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:01.841 "strip_size_kb": 0, 00:13:01.841 "state": "online", 00:13:01.841 "raid_level": "raid1", 00:13:01.841 "superblock": true, 00:13:01.841 "num_base_bdevs": 2, 00:13:01.841 "num_base_bdevs_discovered": 2, 00:13:01.841 "num_base_bdevs_operational": 2, 00:13:01.841 "base_bdevs_list": [ 00:13:01.841 { 00:13:01.841 "name": "spare", 00:13:01.841 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:01.841 "is_configured": true, 00:13:01.841 "data_offset": 2048, 00:13:01.841 "data_size": 63488 00:13:01.841 }, 00:13:01.841 { 00:13:01.841 "name": "BaseBdev2", 00:13:01.841 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:01.841 "is_configured": true, 00:13:01.841 "data_offset": 2048, 00:13:01.841 "data_size": 63488 00:13:01.841 } 00:13:01.841 ] 00:13:01.841 }' 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.841 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 [2024-11-21 03:21:49.723266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.410 [2024-11-21 03:21:49.723301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.410 00:13:02.410 Latency(us) 00:13:02.410 [2024-11-21T03:21:49.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.410 Job: raid_bdev1 (Core Mask 
0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:02.410 raid_bdev1 : 7.63 100.81 302.44 0.00 0.00 13695.87 292.75 108303.20 00:13:02.410 [2024-11-21T03:21:49.976Z] =================================================================================================================== 00:13:02.410 [2024-11-21T03:21:49.976Z] Total : 100.81 302.44 0.00 0.00 13695.87 292.75 108303.20 00:13:02.410 [2024-11-21 03:21:49.758786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.410 [2024-11-21 03:21:49.758832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.410 [2024-11-21 03:21:49.758924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.410 [2024-11-21 03:21:49.758948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:02.410 { 00:13:02.410 "results": [ 00:13:02.410 { 00:13:02.410 "job": "raid_bdev1", 00:13:02.410 "core_mask": "0x1", 00:13:02.410 "workload": "randrw", 00:13:02.410 "percentage": 50, 00:13:02.410 "status": "finished", 00:13:02.410 "queue_depth": 2, 00:13:02.410 "io_size": 3145728, 00:13:02.410 "runtime": 7.627921, 00:13:02.410 "iops": 100.81383905260687, 00:13:02.410 "mibps": 302.4415171578206, 00:13:02.410 "io_failed": 0, 00:13:02.410 "io_timeout": 0, 00:13:02.410 "avg_latency_us": 13695.866831347234, 00:13:02.410 "min_latency_us": 292.74993462912926, 00:13:02.410 "max_latency_us": 108303.19532816178 00:13:02.410 } 00:13:02.410 ], 00:13:02.410 "core_count": 1 00:13:02.410 } 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:02.410 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.411 03:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:02.671 /dev/nbd0 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:02.671 03:21:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.671 1+0 records in 00:13:02.671 1+0 records out 00:13:02.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273464 s, 15.0 MB/s 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.671 
03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.671 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:02.931 /dev/nbd1 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.931 1+0 records in 00:13:02.931 1+0 records out 00:13:02.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504453 s, 8.1 MB/s 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.931 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.932 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.191 03:21:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.191 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.452 
[2024-11-21 03:21:50.908105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.452 [2024-11-21 03:21:50.908214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.452 [2024-11-21 03:21:50.908262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:03.452 [2024-11-21 03:21:50.908294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.452 [2024-11-21 03:21:50.910471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.452 [2024-11-21 03:21:50.910544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.452 [2024-11-21 03:21:50.910661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:03.452 [2024-11-21 03:21:50.910729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.452 [2024-11-21 03:21:50.910875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.452 spare 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.452 03:21:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.452 [2024-11-21 03:21:51.010993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:03.452 [2024-11-21 03:21:51.011101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.452 [2024-11-21 03:21:51.011479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:03.452 [2024-11-21 03:21:51.011709] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:03.452 [2024-11-21 03:21:51.011755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:03.452 [2024-11-21 03:21:51.011955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.452 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.452 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.452 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.452 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.712 03:21:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.712 "name": "raid_bdev1", 00:13:03.712 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:03.712 "strip_size_kb": 0, 00:13:03.712 "state": "online", 00:13:03.712 "raid_level": "raid1", 00:13:03.712 "superblock": true, 00:13:03.712 "num_base_bdevs": 2, 00:13:03.712 "num_base_bdevs_discovered": 2, 00:13:03.712 "num_base_bdevs_operational": 2, 00:13:03.712 "base_bdevs_list": [ 00:13:03.712 { 00:13:03.712 "name": "spare", 00:13:03.712 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:03.712 "is_configured": true, 00:13:03.712 "data_offset": 2048, 00:13:03.712 "data_size": 63488 00:13:03.712 }, 00:13:03.712 { 00:13:03.712 "name": "BaseBdev2", 00:13:03.712 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:03.712 "is_configured": true, 00:13:03.712 "data_offset": 2048, 00:13:03.712 "data_size": 63488 00:13:03.712 } 00:13:03.712 ] 00:13:03.712 }' 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.712 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.971 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.972 "name": "raid_bdev1", 00:13:03.972 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:03.972 "strip_size_kb": 0, 00:13:03.972 "state": "online", 00:13:03.972 "raid_level": "raid1", 00:13:03.972 "superblock": true, 00:13:03.972 "num_base_bdevs": 2, 00:13:03.972 "num_base_bdevs_discovered": 2, 00:13:03.972 "num_base_bdevs_operational": 2, 00:13:03.972 "base_bdevs_list": [ 00:13:03.972 { 00:13:03.972 "name": "spare", 00:13:03.972 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:03.972 "is_configured": true, 00:13:03.972 "data_offset": 2048, 00:13:03.972 "data_size": 63488 00:13:03.972 }, 00:13:03.972 { 00:13:03.972 "name": "BaseBdev2", 00:13:03.972 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:03.972 "is_configured": true, 00:13:03.972 "data_offset": 2048, 00:13:03.972 "data_size": 63488 00:13:03.972 } 00:13:03.972 ] 00:13:03.972 }' 00:13:03.972 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.232 [2024-11-21 03:21:51.660437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.232 03:21:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.232 "name": "raid_bdev1", 00:13:04.232 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:04.232 "strip_size_kb": 0, 00:13:04.232 "state": "online", 00:13:04.232 "raid_level": "raid1", 00:13:04.232 "superblock": true, 00:13:04.232 "num_base_bdevs": 2, 00:13:04.232 "num_base_bdevs_discovered": 1, 00:13:04.232 "num_base_bdevs_operational": 1, 00:13:04.232 "base_bdevs_list": [ 00:13:04.232 { 00:13:04.232 "name": null, 00:13:04.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.232 "is_configured": false, 00:13:04.232 "data_offset": 0, 00:13:04.232 "data_size": 63488 00:13:04.232 }, 00:13:04.232 { 00:13:04.232 "name": "BaseBdev2", 00:13:04.232 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:04.232 "is_configured": true, 00:13:04.232 "data_offset": 2048, 00:13:04.232 "data_size": 63488 00:13:04.232 } 00:13:04.232 ] 00:13:04.232 }' 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.232 03:21:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.801 03:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.801 03:21:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.801 03:21:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.801 [2024-11-21 03:21:52.104698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.801 [2024-11-21 03:21:52.105000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:04.801 [2024-11-21 03:21:52.105086] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:04.801 [2024-11-21 03:21:52.105184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.801 [2024-11-21 03:21:52.110564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:04.801 03:21:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.801 03:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:04.801 [2024-11-21 03:21:52.112818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.741 "name": "raid_bdev1", 00:13:05.741 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:05.741 "strip_size_kb": 0, 00:13:05.741 "state": "online", 00:13:05.741 "raid_level": "raid1", 00:13:05.741 "superblock": true, 00:13:05.741 "num_base_bdevs": 2, 00:13:05.741 "num_base_bdevs_discovered": 2, 00:13:05.741 "num_base_bdevs_operational": 2, 00:13:05.741 "process": { 00:13:05.741 "type": "rebuild", 00:13:05.741 "target": "spare", 00:13:05.741 "progress": { 00:13:05.741 "blocks": 20480, 00:13:05.741 "percent": 32 00:13:05.741 } 00:13:05.741 }, 00:13:05.741 "base_bdevs_list": [ 00:13:05.741 { 00:13:05.741 "name": "spare", 00:13:05.741 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:05.741 "is_configured": true, 00:13:05.741 "data_offset": 2048, 00:13:05.741 "data_size": 63488 00:13:05.741 }, 00:13:05.741 { 00:13:05.741 "name": "BaseBdev2", 00:13:05.741 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:05.741 "is_configured": true, 00:13:05.741 "data_offset": 2048, 00:13:05.741 "data_size": 63488 00:13:05.741 } 00:13:05.741 ] 00:13:05.741 }' 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.741 
03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.741 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.741 [2024-11-21 03:21:53.279464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.000 [2024-11-21 03:21:53.319860] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.000 [2024-11-21 03:21:53.320061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.000 [2024-11-21 03:21:53.320081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.000 [2024-11-21 03:21:53.320093] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.000 "name": "raid_bdev1", 00:13:06.000 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:06.000 "strip_size_kb": 0, 00:13:06.000 "state": "online", 00:13:06.000 "raid_level": "raid1", 00:13:06.000 "superblock": true, 00:13:06.000 "num_base_bdevs": 2, 00:13:06.000 "num_base_bdevs_discovered": 1, 00:13:06.000 "num_base_bdevs_operational": 1, 00:13:06.000 "base_bdevs_list": [ 00:13:06.000 { 00:13:06.000 "name": null, 00:13:06.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.000 "is_configured": false, 00:13:06.000 "data_offset": 0, 00:13:06.000 "data_size": 63488 00:13:06.000 }, 00:13:06.000 { 00:13:06.000 "name": "BaseBdev2", 00:13:06.000 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:06.000 "is_configured": true, 00:13:06.000 "data_offset": 2048, 00:13:06.000 "data_size": 63488 00:13:06.000 } 00:13:06.000 ] 00:13:06.000 }' 00:13:06.000 03:21:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.000 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.259 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.259 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.259 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.259 [2024-11-21 03:21:53.721479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.259 [2024-11-21 03:21:53.721656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.259 [2024-11-21 03:21:53.721709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:06.259 [2024-11-21 03:21:53.721758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.259 [2024-11-21 03:21:53.722282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.259 [2024-11-21 03:21:53.722351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.259 [2024-11-21 03:21:53.722482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:06.259 [2024-11-21 03:21:53.722531] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:06.259 [2024-11-21 03:21:53.722584] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:06.259 [2024-11-21 03:21:53.722655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.259 [2024-11-21 03:21:53.728082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:13:06.259 spare 00:13:06.259 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.259 03:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:06.259 [2024-11-21 03:21:53.730310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.196 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.456 "name": "raid_bdev1", 00:13:07.456 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:07.456 "strip_size_kb": 0, 00:13:07.456 
"state": "online", 00:13:07.456 "raid_level": "raid1", 00:13:07.456 "superblock": true, 00:13:07.456 "num_base_bdevs": 2, 00:13:07.456 "num_base_bdevs_discovered": 2, 00:13:07.456 "num_base_bdevs_operational": 2, 00:13:07.456 "process": { 00:13:07.456 "type": "rebuild", 00:13:07.456 "target": "spare", 00:13:07.456 "progress": { 00:13:07.456 "blocks": 20480, 00:13:07.456 "percent": 32 00:13:07.456 } 00:13:07.456 }, 00:13:07.456 "base_bdevs_list": [ 00:13:07.456 { 00:13:07.456 "name": "spare", 00:13:07.456 "uuid": "546eaf23-e41a-5fb0-9be6-2d983b02d93e", 00:13:07.456 "is_configured": true, 00:13:07.456 "data_offset": 2048, 00:13:07.456 "data_size": 63488 00:13:07.456 }, 00:13:07.456 { 00:13:07.456 "name": "BaseBdev2", 00:13:07.456 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:07.456 "is_configured": true, 00:13:07.456 "data_offset": 2048, 00:13:07.456 "data_size": 63488 00:13:07.456 } 00:13:07.456 ] 00:13:07.456 }' 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.456 [2024-11-21 03:21:54.856893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.456 [2024-11-21 03:21:54.937345] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:07.456 [2024-11-21 03:21:54.937418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.456 [2024-11-21 03:21:54.937456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.456 [2024-11-21 03:21:54.937464] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.456 03:21:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.456 03:21:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.456 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.456 "name": "raid_bdev1", 00:13:07.456 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:07.456 "strip_size_kb": 0, 00:13:07.456 "state": "online", 00:13:07.456 "raid_level": "raid1", 00:13:07.456 "superblock": true, 00:13:07.456 "num_base_bdevs": 2, 00:13:07.456 "num_base_bdevs_discovered": 1, 00:13:07.456 "num_base_bdevs_operational": 1, 00:13:07.456 "base_bdevs_list": [ 00:13:07.456 { 00:13:07.456 "name": null, 00:13:07.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.456 "is_configured": false, 00:13:07.456 "data_offset": 0, 00:13:07.456 "data_size": 63488 00:13:07.456 }, 00:13:07.456 { 00:13:07.456 "name": "BaseBdev2", 00:13:07.456 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:07.456 "is_configured": true, 00:13:07.456 "data_offset": 2048, 00:13:07.456 "data_size": 63488 00:13:07.456 } 00:13:07.456 ] 00:13:07.456 }' 00:13:07.456 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.456 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.026 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.026 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.027 "name": "raid_bdev1", 00:13:08.027 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:08.027 "strip_size_kb": 0, 00:13:08.027 "state": "online", 00:13:08.027 "raid_level": "raid1", 00:13:08.027 "superblock": true, 00:13:08.027 "num_base_bdevs": 2, 00:13:08.027 "num_base_bdevs_discovered": 1, 00:13:08.027 "num_base_bdevs_operational": 1, 00:13:08.027 "base_bdevs_list": [ 00:13:08.027 { 00:13:08.027 "name": null, 00:13:08.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.027 "is_configured": false, 00:13:08.027 "data_offset": 0, 00:13:08.027 "data_size": 63488 00:13:08.027 }, 00:13:08.027 { 00:13:08.027 "name": "BaseBdev2", 00:13:08.027 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:08.027 "is_configured": true, 00:13:08.027 "data_offset": 2048, 00:13:08.027 "data_size": 63488 00:13:08.027 } 00:13:08.027 ] 00:13:08.027 }' 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.027 [2024-11-21 03:21:55.530895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:08.027 [2024-11-21 03:21:55.530981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.027 [2024-11-21 03:21:55.531005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:08.027 [2024-11-21 03:21:55.531025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.027 [2024-11-21 03:21:55.531450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.027 [2024-11-21 03:21:55.531469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.027 [2024-11-21 03:21:55.531549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:08.027 [2024-11-21 03:21:55.531566] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:08.027 [2024-11-21 03:21:55.531576] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:08.027 [2024-11-21 03:21:55.531586] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:08.027 BaseBdev1 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.027 03:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.408 "name": "raid_bdev1", 00:13:09.408 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:09.408 "strip_size_kb": 0, 00:13:09.408 "state": "online", 00:13:09.408 "raid_level": "raid1", 00:13:09.408 "superblock": true, 00:13:09.408 "num_base_bdevs": 2, 00:13:09.408 "num_base_bdevs_discovered": 1, 00:13:09.408 "num_base_bdevs_operational": 1, 00:13:09.408 "base_bdevs_list": [ 00:13:09.408 { 00:13:09.408 "name": null, 00:13:09.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.408 "is_configured": false, 00:13:09.408 "data_offset": 0, 00:13:09.408 "data_size": 63488 00:13:09.408 }, 00:13:09.408 { 00:13:09.408 "name": "BaseBdev2", 00:13:09.408 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:09.408 "is_configured": true, 00:13:09.408 "data_offset": 2048, 00:13:09.408 "data_size": 63488 00:13:09.408 } 00:13:09.408 ] 00:13:09.408 }' 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.408 03:21:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.668 "name": "raid_bdev1", 00:13:09.668 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:09.668 "strip_size_kb": 0, 00:13:09.668 "state": "online", 00:13:09.668 "raid_level": "raid1", 00:13:09.668 "superblock": true, 00:13:09.668 "num_base_bdevs": 2, 00:13:09.668 "num_base_bdevs_discovered": 1, 00:13:09.668 "num_base_bdevs_operational": 1, 00:13:09.668 "base_bdevs_list": [ 00:13:09.668 { 00:13:09.668 "name": null, 00:13:09.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.668 "is_configured": false, 00:13:09.668 "data_offset": 0, 00:13:09.668 "data_size": 63488 00:13:09.668 }, 00:13:09.668 { 00:13:09.668 "name": "BaseBdev2", 00:13:09.668 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:09.668 "is_configured": true, 00:13:09.668 "data_offset": 2048, 00:13:09.668 "data_size": 63488 00:13:09.668 } 00:13:09.668 ] 00:13:09.668 }' 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.668 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.668 [2024-11-21 03:21:57.167630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.668 [2024-11-21 03:21:57.167834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:09.668 [2024-11-21 03:21:57.167854] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:09.668 request: 00:13:09.668 { 00:13:09.668 "base_bdev": "BaseBdev1", 00:13:09.668 "raid_bdev": "raid_bdev1", 00:13:09.668 "method": "bdev_raid_add_base_bdev", 00:13:09.668 "req_id": 1 00:13:09.668 } 00:13:09.668 Got JSON-RPC error response 00:13:09.668 response: 00:13:09.668 { 00:13:09.668 "code": -22, 00:13:09.668 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:09.669 } 00:13:09.669 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:09.669 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:09.669 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:09.669 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:09.669 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:09.669 03:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.048 "name": "raid_bdev1", 00:13:11.048 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:11.048 "strip_size_kb": 0, 00:13:11.048 "state": "online", 00:13:11.048 "raid_level": "raid1", 00:13:11.048 "superblock": true, 00:13:11.048 "num_base_bdevs": 2, 00:13:11.048 "num_base_bdevs_discovered": 1, 00:13:11.048 "num_base_bdevs_operational": 1, 00:13:11.048 "base_bdevs_list": [ 00:13:11.048 { 00:13:11.048 "name": null, 00:13:11.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.048 "is_configured": false, 00:13:11.048 "data_offset": 0, 00:13:11.048 "data_size": 63488 00:13:11.048 }, 00:13:11.048 { 00:13:11.048 "name": "BaseBdev2", 00:13:11.048 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:11.048 "is_configured": true, 00:13:11.048 "data_offset": 2048, 00:13:11.048 "data_size": 63488 00:13:11.048 } 00:13:11.048 ] 00:13:11.048 }' 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.048 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.308 03:21:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.308 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.308 "name": "raid_bdev1", 00:13:11.308 "uuid": "bb4f1999-46c0-4e6d-bc6c-32cab9c73205", 00:13:11.308 "strip_size_kb": 0, 00:13:11.308 "state": "online", 00:13:11.308 "raid_level": "raid1", 00:13:11.308 "superblock": true, 00:13:11.308 "num_base_bdevs": 2, 00:13:11.308 "num_base_bdevs_discovered": 1, 00:13:11.308 "num_base_bdevs_operational": 1, 00:13:11.308 "base_bdevs_list": [ 00:13:11.308 { 00:13:11.308 "name": null, 00:13:11.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.309 "is_configured": false, 00:13:11.309 "data_offset": 0, 00:13:11.309 "data_size": 63488 00:13:11.309 }, 00:13:11.309 { 00:13:11.309 "name": "BaseBdev2", 00:13:11.309 "uuid": "a3d3bb7e-268d-5a02-a57f-c9af3b7611b5", 00:13:11.309 "is_configured": true, 00:13:11.309 "data_offset": 2048, 00:13:11.309 "data_size": 63488 00:13:11.309 } 00:13:11.309 ] 00:13:11.309 }' 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.309 03:21:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89563 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89563 ']' 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89563 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89563 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89563' 00:13:11.309 killing process with pid 89563 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89563 00:13:11.309 Received shutdown signal, test time was about 16.668260 seconds 00:13:11.309 00:13:11.309 Latency(us) 00:13:11.309 [2024-11-21T03:21:58.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.309 [2024-11-21T03:21:58.875Z] =================================================================================================================== 00:13:11.309 [2024-11-21T03:21:58.875Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.309 [2024-11-21 03:21:58.796226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.309 03:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89563 00:13:11.309 [2024-11-21 03:21:58.796394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.309 [2024-11-21 03:21:58.796456] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.309 [2024-11-21 03:21:58.796470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.309 [2024-11-21 03:21:58.823710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:11.569 00:13:11.569 real 0m18.650s 00:13:11.569 user 0m24.767s 00:13:11.569 sys 0m2.235s 00:13:11.569 ************************************ 00:13:11.569 END TEST raid_rebuild_test_sb_io 00:13:11.569 ************************************ 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.569 03:21:59 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:11.569 03:21:59 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:11.569 03:21:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:11.569 03:21:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.569 03:21:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.569 ************************************ 00:13:11.569 START TEST raid_rebuild_test 00:13:11.569 ************************************ 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:11.569 03:21:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=90235 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 90235 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 90235 ']' 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.569 03:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.842 [2024-11-21 03:21:59.213625] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:13:11.842 [2024-11-21 03:21:59.213851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90235 ] 00:13:11.842 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:11.842 Zero copy mechanism will not be used. 00:13:11.842 [2024-11-21 03:21:59.349505] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:11.842 [2024-11-21 03:21:59.380223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.101 [2024-11-21 03:21:59.410528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.101 [2024-11-21 03:21:59.454123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.101 [2024-11-21 03:21:59.454161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.668 BaseBdev1_malloc 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:12.668 
03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.668 [2024-11-21 03:22:00.070520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:12.668 [2024-11-21 03:22:00.070691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.668 [2024-11-21 03:22:00.070731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:12.668 [2024-11-21 03:22:00.070747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.668 [2024-11-21 03:22:00.073243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.668 [2024-11-21 03:22:00.073285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:12.668 BaseBdev1 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.668 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.668 BaseBdev2_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 [2024-11-21 
03:22:00.099490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:12.669 [2024-11-21 03:22:00.099649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.669 [2024-11-21 03:22:00.099674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:12.669 [2024-11-21 03:22:00.099685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.669 [2024-11-21 03:22:00.102075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.669 [2024-11-21 03:22:00.102118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:12.669 BaseBdev2 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 BaseBdev3_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 [2024-11-21 03:22:00.128528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:12.669 [2024-11-21 03:22:00.128602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:12.669 [2024-11-21 03:22:00.128624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:12.669 [2024-11-21 03:22:00.128635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.669 [2024-11-21 03:22:00.130944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.669 [2024-11-21 03:22:00.131093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:12.669 BaseBdev3 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 BaseBdev4_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 [2024-11-21 03:22:00.167678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:12.669 [2024-11-21 03:22:00.167834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.669 [2024-11-21 03:22:00.167864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:12.669 [2024-11-21 03:22:00.167876] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.669 [2024-11-21 03:22:00.170278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.669 [2024-11-21 03:22:00.170320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:12.669 BaseBdev4 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 spare_malloc 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 spare_delay 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 [2024-11-21 03:22:00.208604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:12.669 [2024-11-21 03:22:00.208697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.669 [2024-11-21 03:22:00.208725] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:12.669 [2024-11-21 03:22:00.208739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.669 [2024-11-21 03:22:00.210874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.669 [2024-11-21 03:22:00.211003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:12.669 spare 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 [2024-11-21 03:22:00.220724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.669 [2024-11-21 03:22:00.222881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.669 [2024-11-21 03:22:00.222978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.669 [2024-11-21 03:22:00.223042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:12.669 [2024-11-21 03:22:00.223127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:12.669 [2024-11-21 03:22:00.223144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:12.669 [2024-11-21 03:22:00.223418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:12.669 [2024-11-21 03:22:00.223592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:12.669 [2024-11-21 03:22:00.223611] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:12.669 [2024-11-21 03:22:00.223766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.669 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.927 "name": "raid_bdev1", 00:13:12.927 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:12.927 "strip_size_kb": 0, 00:13:12.927 "state": "online", 00:13:12.927 "raid_level": "raid1", 00:13:12.927 "superblock": false, 00:13:12.927 "num_base_bdevs": 4, 00:13:12.927 "num_base_bdevs_discovered": 4, 00:13:12.927 "num_base_bdevs_operational": 4, 00:13:12.927 "base_bdevs_list": [ 00:13:12.927 { 00:13:12.927 "name": "BaseBdev1", 00:13:12.927 "uuid": "59220e3f-97ad-5678-8fa7-284700b75c6d", 00:13:12.927 "is_configured": true, 00:13:12.927 "data_offset": 0, 00:13:12.927 "data_size": 65536 00:13:12.927 }, 00:13:12.927 { 00:13:12.927 "name": "BaseBdev2", 00:13:12.927 "uuid": "eb0a72b9-d654-56ea-acea-bc9f6ffbd26b", 00:13:12.927 "is_configured": true, 00:13:12.927 "data_offset": 0, 00:13:12.927 "data_size": 65536 00:13:12.927 }, 00:13:12.927 { 00:13:12.927 "name": "BaseBdev3", 00:13:12.927 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:12.927 "is_configured": true, 00:13:12.927 "data_offset": 0, 00:13:12.927 "data_size": 65536 00:13:12.927 }, 00:13:12.927 { 00:13:12.927 "name": "BaseBdev4", 00:13:12.927 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:12.927 "is_configured": true, 00:13:12.927 "data_offset": 0, 00:13:12.927 "data_size": 65536 00:13:12.927 } 00:13:12.927 ] 00:13:12.927 }' 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.927 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.185 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:13.186 [2024-11-21 
03:22:00.641105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:13.186 03:22:00 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.186 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:13.444 [2024-11-21 03:22:00.928945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:13.444 /dev/nbd0 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.444 1+0 records in 00:13:13.444 1+0 records out 00:13:13.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328101 s, 12.5 MB/s 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.444 03:22:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:13.444 03:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:20.030 65536+0 records in 00:13:20.030 65536+0 records out 00:13:20.030 33554432 bytes (34 MB, 32 MiB) copied, 5.82321 s, 5.8 MB/s 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.030 03:22:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.030 [2024-11-21 03:22:07.020503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.030 [2024-11-21 03:22:07.056981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.030 03:22:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.030 "name": "raid_bdev1", 00:13:20.030 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:20.030 "strip_size_kb": 0, 00:13:20.030 "state": "online", 00:13:20.030 "raid_level": "raid1", 00:13:20.030 "superblock": false, 00:13:20.030 "num_base_bdevs": 4, 00:13:20.030 "num_base_bdevs_discovered": 3, 00:13:20.030 "num_base_bdevs_operational": 3, 00:13:20.030 "base_bdevs_list": [ 00:13:20.030 { 00:13:20.030 "name": null, 00:13:20.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.030 "is_configured": false, 00:13:20.030 "data_offset": 0, 00:13:20.030 "data_size": 65536 00:13:20.030 }, 00:13:20.030 { 00:13:20.030 "name": "BaseBdev2", 00:13:20.030 "uuid": "eb0a72b9-d654-56ea-acea-bc9f6ffbd26b", 00:13:20.030 "is_configured": true, 00:13:20.030 "data_offset": 0, 00:13:20.030 "data_size": 65536 00:13:20.030 }, 00:13:20.030 { 00:13:20.030 "name": "BaseBdev3", 00:13:20.030 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:20.030 "is_configured": true, 
00:13:20.030 "data_offset": 0, 00:13:20.030 "data_size": 65536 00:13:20.030 }, 00:13:20.030 { 00:13:20.030 "name": "BaseBdev4", 00:13:20.030 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:20.030 "is_configured": true, 00:13:20.030 "data_offset": 0, 00:13:20.030 "data_size": 65536 00:13:20.030 } 00:13:20.030 ] 00:13:20.030 }' 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.030 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.031 [2024-11-21 03:22:07.485135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.031 [2024-11-21 03:22:07.489396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a180 00:13:20.031 03:22:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.031 03:22:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:20.031 [2024-11-21 03:22:07.491506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.970 03:22:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.970 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.230 "name": "raid_bdev1", 00:13:21.230 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:21.230 "strip_size_kb": 0, 00:13:21.230 "state": "online", 00:13:21.230 "raid_level": "raid1", 00:13:21.230 "superblock": false, 00:13:21.230 "num_base_bdevs": 4, 00:13:21.230 "num_base_bdevs_discovered": 4, 00:13:21.230 "num_base_bdevs_operational": 4, 00:13:21.230 "process": { 00:13:21.230 "type": "rebuild", 00:13:21.230 "target": "spare", 00:13:21.230 "progress": { 00:13:21.230 "blocks": 20480, 00:13:21.230 "percent": 31 00:13:21.230 } 00:13:21.230 }, 00:13:21.230 "base_bdevs_list": [ 00:13:21.230 { 00:13:21.230 "name": "spare", 00:13:21.230 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:21.230 "is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 }, 00:13:21.230 { 00:13:21.230 "name": "BaseBdev2", 00:13:21.230 "uuid": "eb0a72b9-d654-56ea-acea-bc9f6ffbd26b", 00:13:21.230 "is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 }, 00:13:21.230 { 00:13:21.230 "name": "BaseBdev3", 00:13:21.230 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:21.230 "is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 }, 00:13:21.230 { 00:13:21.230 "name": "BaseBdev4", 00:13:21.230 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:21.230 
"is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 } 00:13:21.230 ] 00:13:21.230 }' 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.230 [2024-11-21 03:22:08.646825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.230 [2024-11-21 03:22:08.698796] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.230 [2024-11-21 03:22:08.698872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.230 [2024-11-21 03:22:08.698892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.230 [2024-11-21 03:22:08.698904] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.230 03:22:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.230 "name": "raid_bdev1", 00:13:21.230 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:21.230 "strip_size_kb": 0, 00:13:21.230 "state": "online", 00:13:21.230 "raid_level": "raid1", 00:13:21.230 "superblock": false, 00:13:21.230 "num_base_bdevs": 4, 00:13:21.230 "num_base_bdevs_discovered": 3, 00:13:21.230 "num_base_bdevs_operational": 3, 00:13:21.230 "base_bdevs_list": [ 00:13:21.230 { 00:13:21.230 "name": null, 00:13:21.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.230 "is_configured": false, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 }, 00:13:21.230 { 00:13:21.230 "name": "BaseBdev2", 
00:13:21.230 "uuid": "eb0a72b9-d654-56ea-acea-bc9f6ffbd26b", 00:13:21.230 "is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 }, 00:13:21.230 { 00:13:21.230 "name": "BaseBdev3", 00:13:21.230 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:21.230 "is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 }, 00:13:21.230 { 00:13:21.230 "name": "BaseBdev4", 00:13:21.230 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:21.230 "is_configured": true, 00:13:21.230 "data_offset": 0, 00:13:21.230 "data_size": 65536 00:13:21.230 } 00:13:21.230 ] 00:13:21.230 }' 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.230 03:22:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:21.800 "name": "raid_bdev1", 00:13:21.800 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:21.800 "strip_size_kb": 0, 00:13:21.800 "state": "online", 00:13:21.800 "raid_level": "raid1", 00:13:21.800 "superblock": false, 00:13:21.800 "num_base_bdevs": 4, 00:13:21.800 "num_base_bdevs_discovered": 3, 00:13:21.800 "num_base_bdevs_operational": 3, 00:13:21.800 "base_bdevs_list": [ 00:13:21.800 { 00:13:21.800 "name": null, 00:13:21.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.800 "is_configured": false, 00:13:21.800 "data_offset": 0, 00:13:21.800 "data_size": 65536 00:13:21.800 }, 00:13:21.800 { 00:13:21.800 "name": "BaseBdev2", 00:13:21.800 "uuid": "eb0a72b9-d654-56ea-acea-bc9f6ffbd26b", 00:13:21.800 "is_configured": true, 00:13:21.800 "data_offset": 0, 00:13:21.800 "data_size": 65536 00:13:21.800 }, 00:13:21.800 { 00:13:21.800 "name": "BaseBdev3", 00:13:21.800 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:21.800 "is_configured": true, 00:13:21.800 "data_offset": 0, 00:13:21.800 "data_size": 65536 00:13:21.800 }, 00:13:21.800 { 00:13:21.800 "name": "BaseBdev4", 00:13:21.800 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:21.800 "is_configured": true, 00:13:21.800 "data_offset": 0, 00:13:21.800 "data_size": 65536 00:13:21.800 } 00:13:21.800 ] 00:13:21.800 }' 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.800 [2024-11-21 03:22:09.315803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.800 [2024-11-21 03:22:09.320049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a250 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.800 03:22:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:21.800 [2024-11-21 03:22:09.322208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.184 "name": "raid_bdev1", 00:13:23.184 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:23.184 "strip_size_kb": 0, 00:13:23.184 
"state": "online", 00:13:23.184 "raid_level": "raid1", 00:13:23.184 "superblock": false, 00:13:23.184 "num_base_bdevs": 4, 00:13:23.184 "num_base_bdevs_discovered": 4, 00:13:23.184 "num_base_bdevs_operational": 4, 00:13:23.184 "process": { 00:13:23.184 "type": "rebuild", 00:13:23.184 "target": "spare", 00:13:23.184 "progress": { 00:13:23.184 "blocks": 20480, 00:13:23.184 "percent": 31 00:13:23.184 } 00:13:23.184 }, 00:13:23.184 "base_bdevs_list": [ 00:13:23.184 { 00:13:23.184 "name": "spare", 00:13:23.184 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:23.184 "is_configured": true, 00:13:23.184 "data_offset": 0, 00:13:23.184 "data_size": 65536 00:13:23.184 }, 00:13:23.184 { 00:13:23.184 "name": "BaseBdev2", 00:13:23.184 "uuid": "eb0a72b9-d654-56ea-acea-bc9f6ffbd26b", 00:13:23.184 "is_configured": true, 00:13:23.184 "data_offset": 0, 00:13:23.184 "data_size": 65536 00:13:23.184 }, 00:13:23.184 { 00:13:23.184 "name": "BaseBdev3", 00:13:23.184 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:23.184 "is_configured": true, 00:13:23.184 "data_offset": 0, 00:13:23.184 "data_size": 65536 00:13:23.184 }, 00:13:23.184 { 00:13:23.184 "name": "BaseBdev4", 00:13:23.184 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:23.184 "is_configured": true, 00:13:23.184 "data_offset": 0, 00:13:23.184 "data_size": 65536 00:13:23.184 } 00:13:23.184 ] 00:13:23.184 }' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=4 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.184 [2024-11-21 03:22:10.464952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.184 [2024-11-21 03:22:10.528797] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0a250 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.184 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.184 "name": "raid_bdev1", 00:13:23.184 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:23.184 "strip_size_kb": 0, 00:13:23.184 "state": "online", 00:13:23.184 "raid_level": "raid1", 00:13:23.184 "superblock": false, 00:13:23.184 "num_base_bdevs": 4, 00:13:23.184 "num_base_bdevs_discovered": 3, 00:13:23.184 "num_base_bdevs_operational": 3, 00:13:23.184 "process": { 00:13:23.184 "type": "rebuild", 00:13:23.184 "target": "spare", 00:13:23.184 "progress": { 00:13:23.184 "blocks": 24576, 00:13:23.184 "percent": 37 00:13:23.184 } 00:13:23.184 }, 00:13:23.184 "base_bdevs_list": [ 00:13:23.184 { 00:13:23.184 "name": "spare", 00:13:23.184 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:23.184 "is_configured": true, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 }, 00:13:23.185 { 00:13:23.185 "name": null, 00:13:23.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.185 "is_configured": false, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 }, 00:13:23.185 { 00:13:23.185 "name": "BaseBdev3", 00:13:23.185 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:23.185 "is_configured": true, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 }, 00:13:23.185 { 00:13:23.185 "name": "BaseBdev4", 00:13:23.185 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:23.185 "is_configured": true, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 } 00:13:23.185 ] 00:13:23.185 }' 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=365 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.185 "name": "raid_bdev1", 00:13:23.185 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:23.185 "strip_size_kb": 0, 00:13:23.185 "state": "online", 00:13:23.185 "raid_level": "raid1", 00:13:23.185 "superblock": false, 00:13:23.185 "num_base_bdevs": 4, 00:13:23.185 "num_base_bdevs_discovered": 3, 00:13:23.185 
"num_base_bdevs_operational": 3, 00:13:23.185 "process": { 00:13:23.185 "type": "rebuild", 00:13:23.185 "target": "spare", 00:13:23.185 "progress": { 00:13:23.185 "blocks": 26624, 00:13:23.185 "percent": 40 00:13:23.185 } 00:13:23.185 }, 00:13:23.185 "base_bdevs_list": [ 00:13:23.185 { 00:13:23.185 "name": "spare", 00:13:23.185 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:23.185 "is_configured": true, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 }, 00:13:23.185 { 00:13:23.185 "name": null, 00:13:23.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.185 "is_configured": false, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 }, 00:13:23.185 { 00:13:23.185 "name": "BaseBdev3", 00:13:23.185 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:23.185 "is_configured": true, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 }, 00:13:23.185 { 00:13:23.185 "name": "BaseBdev4", 00:13:23.185 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:23.185 "is_configured": true, 00:13:23.185 "data_offset": 0, 00:13:23.185 "data_size": 65536 00:13:23.185 } 00:13:23.185 ] 00:13:23.185 }' 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.185 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.444 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.444 03:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.384 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.385 03:22:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.385 "name": "raid_bdev1", 00:13:24.385 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:24.385 "strip_size_kb": 0, 00:13:24.385 "state": "online", 00:13:24.385 "raid_level": "raid1", 00:13:24.385 "superblock": false, 00:13:24.385 "num_base_bdevs": 4, 00:13:24.385 "num_base_bdevs_discovered": 3, 00:13:24.385 "num_base_bdevs_operational": 3, 00:13:24.385 "process": { 00:13:24.385 "type": "rebuild", 00:13:24.385 "target": "spare", 00:13:24.385 "progress": { 00:13:24.385 "blocks": 49152, 00:13:24.385 "percent": 75 00:13:24.385 } 00:13:24.385 }, 00:13:24.385 "base_bdevs_list": [ 00:13:24.385 { 00:13:24.385 "name": "spare", 00:13:24.385 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:24.385 "is_configured": true, 00:13:24.385 "data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 }, 00:13:24.385 { 00:13:24.385 "name": null, 00:13:24.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.385 "is_configured": false, 00:13:24.385 
"data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 }, 00:13:24.385 { 00:13:24.385 "name": "BaseBdev3", 00:13:24.385 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:24.385 "is_configured": true, 00:13:24.385 "data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 }, 00:13:24.385 { 00:13:24.385 "name": "BaseBdev4", 00:13:24.385 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:24.385 "is_configured": true, 00:13:24.385 "data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 } 00:13:24.385 ] 00:13:24.385 }' 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.385 03:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.324 [2024-11-21 03:22:12.540329] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:25.324 [2024-11-21 03:22:12.540458] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:25.324 [2024-11-21 03:22:12.540505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 03:22:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.585 "name": "raid_bdev1", 00:13:25.585 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:25.585 "strip_size_kb": 0, 00:13:25.585 "state": "online", 00:13:25.585 "raid_level": "raid1", 00:13:25.585 "superblock": false, 00:13:25.585 "num_base_bdevs": 4, 00:13:25.585 "num_base_bdevs_discovered": 3, 00:13:25.585 "num_base_bdevs_operational": 3, 00:13:25.585 "base_bdevs_list": [ 00:13:25.585 { 00:13:25.585 "name": "spare", 00:13:25.585 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:25.585 "is_configured": true, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 }, 00:13:25.585 { 00:13:25.585 "name": null, 00:13:25.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.585 "is_configured": false, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 }, 00:13:25.585 { 00:13:25.585 "name": "BaseBdev3", 00:13:25.585 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:25.585 "is_configured": true, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 }, 00:13:25.585 { 00:13:25.585 "name": "BaseBdev4", 00:13:25.585 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:25.585 "is_configured": true, 00:13:25.585 "data_offset": 0, 00:13:25.585 
"data_size": 65536 00:13:25.585 } 00:13:25.585 ] 00:13:25.585 }' 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.585 "name": "raid_bdev1", 00:13:25.585 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:25.585 "strip_size_kb": 0, 00:13:25.585 "state": "online", 00:13:25.585 "raid_level": "raid1", 00:13:25.585 "superblock": false, 
00:13:25.585 "num_base_bdevs": 4, 00:13:25.585 "num_base_bdevs_discovered": 3, 00:13:25.585 "num_base_bdevs_operational": 3, 00:13:25.585 "base_bdevs_list": [ 00:13:25.585 { 00:13:25.585 "name": "spare", 00:13:25.585 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:25.585 "is_configured": true, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 }, 00:13:25.585 { 00:13:25.585 "name": null, 00:13:25.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.585 "is_configured": false, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 }, 00:13:25.585 { 00:13:25.585 "name": "BaseBdev3", 00:13:25.585 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:25.585 "is_configured": true, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 }, 00:13:25.585 { 00:13:25.585 "name": "BaseBdev4", 00:13:25.585 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:25.585 "is_configured": true, 00:13:25.585 "data_offset": 0, 00:13:25.585 "data_size": 65536 00:13:25.585 } 00:13:25.585 ] 00:13:25.585 }' 00:13:25.585 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.845 03:22:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.845 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.845 "name": "raid_bdev1", 00:13:25.845 "uuid": "48aa6a8a-69d0-4c8c-8b2b-47abca2c4511", 00:13:25.845 "strip_size_kb": 0, 00:13:25.845 "state": "online", 00:13:25.845 "raid_level": "raid1", 00:13:25.846 "superblock": false, 00:13:25.846 "num_base_bdevs": 4, 00:13:25.846 "num_base_bdevs_discovered": 3, 00:13:25.846 "num_base_bdevs_operational": 3, 00:13:25.846 "base_bdevs_list": [ 00:13:25.846 { 00:13:25.846 "name": "spare", 00:13:25.846 "uuid": "00fb5b67-5a6c-572f-973b-2e25a6a90b8b", 00:13:25.846 "is_configured": true, 00:13:25.846 "data_offset": 0, 00:13:25.846 "data_size": 65536 00:13:25.846 }, 00:13:25.846 { 00:13:25.846 "name": null, 00:13:25.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.846 "is_configured": false, 00:13:25.846 
"data_offset": 0, 00:13:25.846 "data_size": 65536 00:13:25.846 }, 00:13:25.846 { 00:13:25.846 "name": "BaseBdev3", 00:13:25.846 "uuid": "75489508-1920-5dca-9fec-08760cd16f5f", 00:13:25.846 "is_configured": true, 00:13:25.846 "data_offset": 0, 00:13:25.846 "data_size": 65536 00:13:25.846 }, 00:13:25.846 { 00:13:25.846 "name": "BaseBdev4", 00:13:25.846 "uuid": "cc746c2a-f11b-574e-a02c-c2d384fef02d", 00:13:25.846 "is_configured": true, 00:13:25.846 "data_offset": 0, 00:13:25.846 "data_size": 65536 00:13:25.846 } 00:13:25.846 ] 00:13:25.846 }' 00:13:25.846 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.846 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.105 [2024-11-21 03:22:13.593228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.105 [2024-11-21 03:22:13.593270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.105 [2024-11-21 03:22:13.593368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.105 [2024-11-21 03:22:13.593469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.105 [2024-11-21 03:22:13.593480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.105 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:26.365 /dev/nbd0 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.365 1+0 records in 00:13:26.365 1+0 records out 00:13:26.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328584 s, 12.5 MB/s 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.365 03:22:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:26.625 /dev/nbd1 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.625 1+0 records in 00:13:26.625 1+0 records out 00:13:26.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394023 s, 10.4 MB/s 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:26.625 03:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.885 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 90235 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 90235 ']' 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 90235 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.145 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90235 00:13:27.405 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.405 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.405 killing process with pid 90235 
00:13:27.405 Received shutdown signal, test time was about 60.000000 seconds 00:13:27.405 00:13:27.405 Latency(us) 00:13:27.405 [2024-11-21T03:22:14.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.405 [2024-11-21T03:22:14.971Z] =================================================================================================================== 00:13:27.405 [2024-11-21T03:22:14.971Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:27.405 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90235' 00:13:27.405 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 90235 00:13:27.405 [2024-11-21 03:22:14.739948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.405 03:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 90235 00:13:27.405 [2024-11-21 03:22:14.791419] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.665 03:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:27.665 00:13:27.665 real 0m15.893s 00:13:27.665 user 0m17.408s 00:13:27.665 sys 0m3.291s 00:13:27.665 03:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.665 ************************************ 00:13:27.665 END TEST raid_rebuild_test 00:13:27.665 ************************************ 00:13:27.665 03:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.665 03:22:15 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:27.665 03:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:27.665 03:22:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.666 03:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.666 ************************************ 00:13:27.666 START TEST 
raid_rebuild_test_sb 00:13:27.666 ************************************ 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90659 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90659 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90659 ']' 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.666 03:22:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.666 03:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.666 [2024-11-21 03:22:15.185926] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:27.666 [2024-11-21 03:22:15.186179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:27.666 Zero copy mechanism will not be used. 00:13:27.666 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90659 ] 00:13:27.926 [2024-11-21 03:22:15.327336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:27.926 [2024-11-21 03:22:15.364726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.926 [2024-11-21 03:22:15.394875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.926 [2024-11-21 03:22:15.438321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.926 [2024-11-21 03:22:15.438371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.495 BaseBdev1_malloc 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.495 [2024-11-21 03:22:16.034540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:28.495 [2024-11-21 03:22:16.034619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.495 [2024-11-21 03:22:16.034654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:28.495 [2024-11-21 
03:22:16.034671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.495 [2024-11-21 03:22:16.037008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.495 [2024-11-21 03:22:16.037130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.495 BaseBdev1 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.495 BaseBdev2_malloc 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.495 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.756 [2024-11-21 03:22:16.059476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:28.757 [2024-11-21 03:22:16.059553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.757 [2024-11-21 03:22:16.059576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:28.757 [2024-11-21 03:22:16.059588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.757 [2024-11-21 03:22:16.061994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:28.757 [2024-11-21 03:22:16.062048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.757 BaseBdev2 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 BaseBdev3_malloc 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 [2024-11-21 03:22:16.088500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:28.757 [2024-11-21 03:22:16.088568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.757 [2024-11-21 03:22:16.088607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:28.757 [2024-11-21 03:22:16.088619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.757 [2024-11-21 03:22:16.090939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.757 [2024-11-21 03:22:16.090985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:28.757 BaseBdev3 00:13:28.757 03:22:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 BaseBdev4_malloc 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 [2024-11-21 03:22:16.126585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:28.757 [2024-11-21 03:22:16.126752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.757 [2024-11-21 03:22:16.126791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:28.757 [2024-11-21 03:22:16.126825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.757 [2024-11-21 03:22:16.129144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.757 [2024-11-21 03:22:16.129223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:28.757 BaseBdev4 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 spare_malloc 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 spare_delay 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 [2024-11-21 03:22:16.167539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:28.757 [2024-11-21 03:22:16.167617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.757 [2024-11-21 03:22:16.167643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:28.757 [2024-11-21 03:22:16.167656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.757 [2024-11-21 03:22:16.169891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.757 [2024-11-21 03:22:16.170007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.757 spare 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 [2024-11-21 03:22:16.179633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.757 [2024-11-21 03:22:16.181662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.757 [2024-11-21 03:22:16.181729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.757 [2024-11-21 03:22:16.181776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:28.757 [2024-11-21 03:22:16.181944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:28.757 [2024-11-21 03:22:16.181959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.757 [2024-11-21 03:22:16.182251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:28.757 [2024-11-21 03:22:16.182420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:28.757 [2024-11-21 03:22:16.182441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:28.757 [2024-11-21 03:22:16.182600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:28.757 03:22:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.757 "name": "raid_bdev1", 00:13:28.757 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:28.757 "strip_size_kb": 0, 00:13:28.757 "state": "online", 00:13:28.757 "raid_level": "raid1", 00:13:28.757 "superblock": true, 00:13:28.757 "num_base_bdevs": 4, 00:13:28.757 "num_base_bdevs_discovered": 4, 00:13:28.757 "num_base_bdevs_operational": 4, 00:13:28.757 "base_bdevs_list": [ 00:13:28.757 { 
00:13:28.757 "name": "BaseBdev1", 00:13:28.757 "uuid": "949acfc3-6825-5d9a-89a3-850ea4a96973", 00:13:28.757 "is_configured": true, 00:13:28.757 "data_offset": 2048, 00:13:28.757 "data_size": 63488 00:13:28.757 }, 00:13:28.757 { 00:13:28.757 "name": "BaseBdev2", 00:13:28.757 "uuid": "c0009697-7499-5623-89c4-28705740b52b", 00:13:28.757 "is_configured": true, 00:13:28.757 "data_offset": 2048, 00:13:28.757 "data_size": 63488 00:13:28.757 }, 00:13:28.757 { 00:13:28.757 "name": "BaseBdev3", 00:13:28.757 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:28.757 "is_configured": true, 00:13:28.757 "data_offset": 2048, 00:13:28.757 "data_size": 63488 00:13:28.757 }, 00:13:28.757 { 00:13:28.757 "name": "BaseBdev4", 00:13:28.757 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:28.757 "is_configured": true, 00:13:28.757 "data_offset": 2048, 00:13:28.757 "data_size": 63488 00:13:28.757 } 00:13:28.757 ] 00:13:28.757 }' 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.757 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.327 [2024-11-21 03:22:16.668093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.327 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:29.587 
[2024-11-21 03:22:16.927904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:29.587 /dev/nbd0 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.587 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.587 1+0 records in 00:13:29.587 1+0 records out 00:13:29.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387019 s, 10.6 MB/s 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 
0 ']' 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:29.588 03:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:34.866 63488+0 records in 00:13:34.866 63488+0 records out 00:13:34.866 32505856 bytes (33 MB, 31 MiB) copied, 5.22206 s, 6.2 MB/s 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.866 [2024-11-21 03:22:22.418168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.866 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.126 [2024-11-21 03:22:22.434781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.126 03:22:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.126 "name": "raid_bdev1", 00:13:35.126 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:35.126 "strip_size_kb": 0, 00:13:35.126 "state": "online", 00:13:35.126 "raid_level": "raid1", 00:13:35.126 "superblock": true, 00:13:35.126 "num_base_bdevs": 4, 00:13:35.126 "num_base_bdevs_discovered": 3, 00:13:35.126 "num_base_bdevs_operational": 3, 00:13:35.126 "base_bdevs_list": [ 00:13:35.126 { 00:13:35.126 "name": null, 00:13:35.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.126 "is_configured": false, 00:13:35.126 "data_offset": 0, 00:13:35.126 "data_size": 63488 00:13:35.126 }, 00:13:35.126 { 00:13:35.126 "name": "BaseBdev2", 00:13:35.126 "uuid": "c0009697-7499-5623-89c4-28705740b52b", 00:13:35.126 "is_configured": true, 00:13:35.126 "data_offset": 2048, 00:13:35.126 "data_size": 63488 00:13:35.126 }, 00:13:35.126 { 00:13:35.126 "name": "BaseBdev3", 00:13:35.126 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:35.126 "is_configured": true, 00:13:35.126 "data_offset": 2048, 00:13:35.126 "data_size": 63488 00:13:35.126 }, 00:13:35.126 { 00:13:35.126 "name": "BaseBdev4", 00:13:35.126 "uuid": 
"271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:35.126 "is_configured": true, 00:13:35.126 "data_offset": 2048, 00:13:35.126 "data_size": 63488 00:13:35.126 } 00:13:35.126 ] 00:13:35.126 }' 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.126 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.386 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.386 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.386 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.386 [2024-11-21 03:22:22.878937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.386 [2024-11-21 03:22:22.883278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3910 00:13:35.386 03:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.386 03:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:35.386 [2024-11-21 03:22:22.885416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.774 03:22:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.774 "name": "raid_bdev1", 00:13:36.774 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:36.774 "strip_size_kb": 0, 00:13:36.774 "state": "online", 00:13:36.774 "raid_level": "raid1", 00:13:36.774 "superblock": true, 00:13:36.774 "num_base_bdevs": 4, 00:13:36.774 "num_base_bdevs_discovered": 4, 00:13:36.774 "num_base_bdevs_operational": 4, 00:13:36.774 "process": { 00:13:36.774 "type": "rebuild", 00:13:36.774 "target": "spare", 00:13:36.774 "progress": { 00:13:36.774 "blocks": 20480, 00:13:36.774 "percent": 32 00:13:36.774 } 00:13:36.774 }, 00:13:36.774 "base_bdevs_list": [ 00:13:36.774 { 00:13:36.774 "name": "spare", 00:13:36.774 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:36.774 "is_configured": true, 00:13:36.774 "data_offset": 2048, 00:13:36.774 "data_size": 63488 00:13:36.774 }, 00:13:36.774 { 00:13:36.774 "name": "BaseBdev2", 00:13:36.774 "uuid": "c0009697-7499-5623-89c4-28705740b52b", 00:13:36.774 "is_configured": true, 00:13:36.774 "data_offset": 2048, 00:13:36.774 "data_size": 63488 00:13:36.774 }, 00:13:36.774 { 00:13:36.774 "name": "BaseBdev3", 00:13:36.774 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:36.774 "is_configured": true, 00:13:36.774 "data_offset": 2048, 00:13:36.774 "data_size": 63488 00:13:36.774 }, 00:13:36.774 { 00:13:36.774 "name": "BaseBdev4", 00:13:36.774 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:36.774 "is_configured": true, 00:13:36.774 "data_offset": 2048, 00:13:36.774 "data_size": 63488 
00:13:36.774 } 00:13:36.774 ] 00:13:36.774 }' 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.774 03:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.774 [2024-11-21 03:22:24.028239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.774 [2024-11-21 03:22:24.092738] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.774 [2024-11-21 03:22:24.092821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.774 [2024-11-21 03:22:24.092839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.774 [2024-11-21 03:22:24.092851] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.774 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.774 "name": "raid_bdev1", 00:13:36.774 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:36.774 "strip_size_kb": 0, 00:13:36.774 "state": "online", 00:13:36.775 "raid_level": "raid1", 00:13:36.775 "superblock": true, 00:13:36.775 "num_base_bdevs": 4, 00:13:36.775 "num_base_bdevs_discovered": 3, 00:13:36.775 "num_base_bdevs_operational": 3, 00:13:36.775 "base_bdevs_list": [ 00:13:36.775 { 00:13:36.775 "name": null, 00:13:36.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.775 "is_configured": false, 00:13:36.775 "data_offset": 0, 00:13:36.775 "data_size": 63488 00:13:36.775 }, 00:13:36.775 { 00:13:36.775 "name": "BaseBdev2", 00:13:36.775 "uuid": 
"c0009697-7499-5623-89c4-28705740b52b", 00:13:36.775 "is_configured": true, 00:13:36.775 "data_offset": 2048, 00:13:36.775 "data_size": 63488 00:13:36.775 }, 00:13:36.775 { 00:13:36.775 "name": "BaseBdev3", 00:13:36.775 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:36.775 "is_configured": true, 00:13:36.775 "data_offset": 2048, 00:13:36.775 "data_size": 63488 00:13:36.775 }, 00:13:36.775 { 00:13:36.775 "name": "BaseBdev4", 00:13:36.775 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:36.775 "is_configured": true, 00:13:36.775 "data_offset": 2048, 00:13:36.775 "data_size": 63488 00:13:36.775 } 00:13:36.775 ] 00:13:36.775 }' 00:13:36.775 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.775 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.056 "name": "raid_bdev1", 00:13:37.056 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:37.056 "strip_size_kb": 0, 00:13:37.056 "state": "online", 00:13:37.056 "raid_level": "raid1", 00:13:37.056 "superblock": true, 00:13:37.056 "num_base_bdevs": 4, 00:13:37.056 "num_base_bdevs_discovered": 3, 00:13:37.056 "num_base_bdevs_operational": 3, 00:13:37.056 "base_bdevs_list": [ 00:13:37.056 { 00:13:37.056 "name": null, 00:13:37.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.056 "is_configured": false, 00:13:37.056 "data_offset": 0, 00:13:37.056 "data_size": 63488 00:13:37.056 }, 00:13:37.056 { 00:13:37.056 "name": "BaseBdev2", 00:13:37.056 "uuid": "c0009697-7499-5623-89c4-28705740b52b", 00:13:37.056 "is_configured": true, 00:13:37.056 "data_offset": 2048, 00:13:37.056 "data_size": 63488 00:13:37.056 }, 00:13:37.056 { 00:13:37.056 "name": "BaseBdev3", 00:13:37.056 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:37.056 "is_configured": true, 00:13:37.056 "data_offset": 2048, 00:13:37.056 "data_size": 63488 00:13:37.056 }, 00:13:37.056 { 00:13:37.056 "name": "BaseBdev4", 00:13:37.056 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:37.056 "is_configured": true, 00:13:37.056 "data_offset": 2048, 00:13:37.056 "data_size": 63488 00:13:37.056 } 00:13:37.056 ] 00:13:37.056 }' 00:13:37.056 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.331 [2024-11-21 03:22:24.681777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.331 [2024-11-21 03:22:24.686110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca39e0 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.331 03:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:37.331 [2024-11-21 03:22:24.688347] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.270 "name": "raid_bdev1", 00:13:38.270 "uuid": 
"a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:38.270 "strip_size_kb": 0, 00:13:38.270 "state": "online", 00:13:38.270 "raid_level": "raid1", 00:13:38.270 "superblock": true, 00:13:38.270 "num_base_bdevs": 4, 00:13:38.270 "num_base_bdevs_discovered": 4, 00:13:38.270 "num_base_bdevs_operational": 4, 00:13:38.270 "process": { 00:13:38.270 "type": "rebuild", 00:13:38.270 "target": "spare", 00:13:38.270 "progress": { 00:13:38.270 "blocks": 20480, 00:13:38.270 "percent": 32 00:13:38.270 } 00:13:38.270 }, 00:13:38.270 "base_bdevs_list": [ 00:13:38.270 { 00:13:38.270 "name": "spare", 00:13:38.270 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:38.270 "is_configured": true, 00:13:38.270 "data_offset": 2048, 00:13:38.270 "data_size": 63488 00:13:38.270 }, 00:13:38.270 { 00:13:38.270 "name": "BaseBdev2", 00:13:38.270 "uuid": "c0009697-7499-5623-89c4-28705740b52b", 00:13:38.270 "is_configured": true, 00:13:38.270 "data_offset": 2048, 00:13:38.270 "data_size": 63488 00:13:38.270 }, 00:13:38.270 { 00:13:38.270 "name": "BaseBdev3", 00:13:38.270 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:38.270 "is_configured": true, 00:13:38.270 "data_offset": 2048, 00:13:38.270 "data_size": 63488 00:13:38.270 }, 00:13:38.270 { 00:13:38.270 "name": "BaseBdev4", 00:13:38.270 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:38.270 "is_configured": true, 00:13:38.270 "data_offset": 2048, 00:13:38.270 "data_size": 63488 00:13:38.270 } 00:13:38.270 ] 00:13:38.270 }' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:38.270 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.270 03:22:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.531 [2024-11-21 03:22:25.839094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.531 [2024-11-21 03:22:25.994925] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca39e0 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.531 03:22:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.531 "name": "raid_bdev1", 00:13:38.531 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:38.531 "strip_size_kb": 0, 00:13:38.531 "state": "online", 00:13:38.531 "raid_level": "raid1", 00:13:38.531 "superblock": true, 00:13:38.531 "num_base_bdevs": 4, 00:13:38.531 "num_base_bdevs_discovered": 3, 00:13:38.531 "num_base_bdevs_operational": 3, 00:13:38.531 "process": { 00:13:38.531 "type": "rebuild", 00:13:38.531 "target": "spare", 00:13:38.531 "progress": { 00:13:38.531 "blocks": 24576, 00:13:38.531 "percent": 38 00:13:38.531 } 00:13:38.531 }, 00:13:38.531 "base_bdevs_list": [ 00:13:38.531 { 00:13:38.531 "name": "spare", 00:13:38.531 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:38.531 "is_configured": true, 00:13:38.531 "data_offset": 2048, 00:13:38.531 "data_size": 63488 00:13:38.531 }, 00:13:38.531 { 00:13:38.531 "name": null, 00:13:38.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.531 "is_configured": false, 00:13:38.531 "data_offset": 0, 00:13:38.531 "data_size": 63488 00:13:38.531 }, 00:13:38.531 { 00:13:38.531 "name": "BaseBdev3", 00:13:38.531 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:38.531 "is_configured": true, 00:13:38.531 "data_offset": 2048, 00:13:38.531 "data_size": 63488 00:13:38.531 }, 00:13:38.531 { 00:13:38.531 "name": 
"BaseBdev4", 00:13:38.531 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:38.531 "is_configured": true, 00:13:38.531 "data_offset": 2048, 00:13:38.531 "data_size": 63488 00:13:38.531 } 00:13:38.531 ] 00:13:38.531 }' 00:13:38.531 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=381 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:38.791 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.791 "name": "raid_bdev1", 00:13:38.791 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:38.791 "strip_size_kb": 0, 00:13:38.791 "state": "online", 00:13:38.791 "raid_level": "raid1", 00:13:38.791 "superblock": true, 00:13:38.791 "num_base_bdevs": 4, 00:13:38.791 "num_base_bdevs_discovered": 3, 00:13:38.791 "num_base_bdevs_operational": 3, 00:13:38.791 "process": { 00:13:38.791 "type": "rebuild", 00:13:38.791 "target": "spare", 00:13:38.791 "progress": { 00:13:38.791 "blocks": 26624, 00:13:38.791 "percent": 41 00:13:38.791 } 00:13:38.791 }, 00:13:38.791 "base_bdevs_list": [ 00:13:38.791 { 00:13:38.791 "name": "spare", 00:13:38.791 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:38.791 "is_configured": true, 00:13:38.791 "data_offset": 2048, 00:13:38.791 "data_size": 63488 00:13:38.791 }, 00:13:38.791 { 00:13:38.791 "name": null, 00:13:38.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.791 "is_configured": false, 00:13:38.791 "data_offset": 0, 00:13:38.791 "data_size": 63488 00:13:38.791 }, 00:13:38.791 { 00:13:38.791 "name": "BaseBdev3", 00:13:38.791 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:38.792 "is_configured": true, 00:13:38.792 "data_offset": 2048, 00:13:38.792 "data_size": 63488 00:13:38.792 }, 00:13:38.792 { 00:13:38.792 "name": "BaseBdev4", 00:13:38.792 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:38.792 "is_configured": true, 00:13:38.792 "data_offset": 2048, 00:13:38.792 "data_size": 63488 00:13:38.792 } 00:13:38.792 ] 00:13:38.792 }' 00:13:38.792 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.792 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.792 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.792 03:22:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.792 03:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.730 03:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.990 "name": "raid_bdev1", 00:13:39.990 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:39.990 "strip_size_kb": 0, 00:13:39.990 "state": "online", 00:13:39.990 "raid_level": "raid1", 00:13:39.990 "superblock": true, 00:13:39.990 "num_base_bdevs": 4, 00:13:39.990 "num_base_bdevs_discovered": 3, 00:13:39.990 "num_base_bdevs_operational": 3, 00:13:39.990 "process": { 00:13:39.990 "type": "rebuild", 00:13:39.990 "target": "spare", 00:13:39.990 "progress": { 00:13:39.990 "blocks": 
49152, 00:13:39.990 "percent": 77 00:13:39.990 } 00:13:39.990 }, 00:13:39.990 "base_bdevs_list": [ 00:13:39.990 { 00:13:39.990 "name": "spare", 00:13:39.990 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:39.990 "is_configured": true, 00:13:39.990 "data_offset": 2048, 00:13:39.990 "data_size": 63488 00:13:39.990 }, 00:13:39.990 { 00:13:39.990 "name": null, 00:13:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.990 "is_configured": false, 00:13:39.990 "data_offset": 0, 00:13:39.990 "data_size": 63488 00:13:39.990 }, 00:13:39.990 { 00:13:39.990 "name": "BaseBdev3", 00:13:39.990 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:39.990 "is_configured": true, 00:13:39.990 "data_offset": 2048, 00:13:39.990 "data_size": 63488 00:13:39.990 }, 00:13:39.990 { 00:13:39.990 "name": "BaseBdev4", 00:13:39.990 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:39.990 "is_configured": true, 00:13:39.990 "data_offset": 2048, 00:13:39.990 "data_size": 63488 00:13:39.990 } 00:13:39.990 ] 00:13:39.990 }' 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.990 03:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.560 [2024-11-21 03:22:27.905685] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:40.560 [2024-11-21 03:22:27.905851] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:40.560 [2024-11-21 03:22:27.905972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.130 03:22:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.130 "name": "raid_bdev1", 00:13:41.130 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:41.130 "strip_size_kb": 0, 00:13:41.130 "state": "online", 00:13:41.130 "raid_level": "raid1", 00:13:41.130 "superblock": true, 00:13:41.130 "num_base_bdevs": 4, 00:13:41.130 "num_base_bdevs_discovered": 3, 00:13:41.130 "num_base_bdevs_operational": 3, 00:13:41.130 "base_bdevs_list": [ 00:13:41.130 { 00:13:41.130 "name": "spare", 00:13:41.130 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:41.130 "is_configured": true, 00:13:41.130 "data_offset": 2048, 00:13:41.130 "data_size": 63488 00:13:41.130 }, 00:13:41.130 { 00:13:41.130 "name": null, 00:13:41.130 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:41.130 "is_configured": false, 00:13:41.130 "data_offset": 0, 00:13:41.130 "data_size": 63488 00:13:41.130 }, 00:13:41.130 { 00:13:41.130 "name": "BaseBdev3", 00:13:41.130 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:41.130 "is_configured": true, 00:13:41.130 "data_offset": 2048, 00:13:41.130 "data_size": 63488 00:13:41.130 }, 00:13:41.130 { 00:13:41.130 "name": "BaseBdev4", 00:13:41.130 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:41.130 "is_configured": true, 00:13:41.130 "data_offset": 2048, 00:13:41.130 "data_size": 63488 00:13:41.130 } 00:13:41.130 ] 00:13:41.130 }' 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.130 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.131 
03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.131 "name": "raid_bdev1", 00:13:41.131 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:41.131 "strip_size_kb": 0, 00:13:41.131 "state": "online", 00:13:41.131 "raid_level": "raid1", 00:13:41.131 "superblock": true, 00:13:41.131 "num_base_bdevs": 4, 00:13:41.131 "num_base_bdevs_discovered": 3, 00:13:41.131 "num_base_bdevs_operational": 3, 00:13:41.131 "base_bdevs_list": [ 00:13:41.131 { 00:13:41.131 "name": "spare", 00:13:41.131 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:41.131 "is_configured": true, 00:13:41.131 "data_offset": 2048, 00:13:41.131 "data_size": 63488 00:13:41.131 }, 00:13:41.131 { 00:13:41.131 "name": null, 00:13:41.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.131 "is_configured": false, 00:13:41.131 "data_offset": 0, 00:13:41.131 "data_size": 63488 00:13:41.131 }, 00:13:41.131 { 00:13:41.131 "name": "BaseBdev3", 00:13:41.131 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:41.131 "is_configured": true, 00:13:41.131 "data_offset": 2048, 00:13:41.131 "data_size": 63488 00:13:41.131 }, 00:13:41.131 { 00:13:41.131 "name": "BaseBdev4", 00:13:41.131 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:41.131 "is_configured": true, 00:13:41.131 "data_offset": 2048, 00:13:41.131 "data_size": 63488 00:13:41.131 } 00:13:41.131 ] 00:13:41.131 }' 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.131 03:22:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.391 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.391 "name": "raid_bdev1", 00:13:41.391 "uuid": 
"a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:41.391 "strip_size_kb": 0, 00:13:41.391 "state": "online", 00:13:41.391 "raid_level": "raid1", 00:13:41.391 "superblock": true, 00:13:41.391 "num_base_bdevs": 4, 00:13:41.391 "num_base_bdevs_discovered": 3, 00:13:41.391 "num_base_bdevs_operational": 3, 00:13:41.391 "base_bdevs_list": [ 00:13:41.391 { 00:13:41.391 "name": "spare", 00:13:41.391 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:41.391 "is_configured": true, 00:13:41.391 "data_offset": 2048, 00:13:41.391 "data_size": 63488 00:13:41.391 }, 00:13:41.391 { 00:13:41.391 "name": null, 00:13:41.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.391 "is_configured": false, 00:13:41.391 "data_offset": 0, 00:13:41.391 "data_size": 63488 00:13:41.391 }, 00:13:41.391 { 00:13:41.391 "name": "BaseBdev3", 00:13:41.391 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:41.392 "is_configured": true, 00:13:41.392 "data_offset": 2048, 00:13:41.392 "data_size": 63488 00:13:41.392 }, 00:13:41.392 { 00:13:41.392 "name": "BaseBdev4", 00:13:41.392 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:41.392 "is_configured": true, 00:13:41.392 "data_offset": 2048, 00:13:41.392 "data_size": 63488 00:13:41.392 } 00:13:41.392 ] 00:13:41.392 }' 00:13:41.392 03:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.392 03:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.651 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.652 [2024-11-21 03:22:29.194771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.652 [2024-11-21 03:22:29.194871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:13:41.652 [2024-11-21 03:22:29.195002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.652 [2024-11-21 03:22:29.195160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.652 [2024-11-21 03:22:29.195216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:41.652 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:41.910 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:41.910 /dev/nbd0 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.169 1+0 records in 00:13:42.169 1+0 records out 00:13:42.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387168 s, 10.6 MB/s 00:13:42.169 03:22:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.169 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.170 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:42.170 /dev/nbd1 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.429 1+0 records in 00:13:42.429 1+0 records out 00:13:42.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267847 s, 15.3 MB/s 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:42.429 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:42.430 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.430 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:42.430 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.430 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:42.430 03:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.430 03:22:29 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.689 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.949 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.950 [2024-11-21 03:22:30.279524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.950 [2024-11-21 03:22:30.279590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.950 [2024-11-21 03:22:30.279618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:42.950 [2024-11-21 03:22:30.279627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.950 [2024-11-21 03:22:30.282093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.950 [2024-11-21 03:22:30.282131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.950 [2024-11-21 03:22:30.282215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:42.950 [2024-11-21 03:22:30.282249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.950 [2024-11-21 03:22:30.282376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:42.950 [2024-11-21 03:22:30.282478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.950 spare 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.950 [2024-11-21 03:22:30.382546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.950 [2024-11-21 03:22:30.382642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.950 [2024-11-21 03:22:30.382981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:13:42.950 [2024-11-21 03:22:30.383173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.950 [2024-11-21 03:22:30.383187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:42.950 [2024-11-21 03:22:30.383319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.950 "name": "raid_bdev1", 00:13:42.950 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:42.950 "strip_size_kb": 0, 00:13:42.950 "state": "online", 00:13:42.950 "raid_level": "raid1", 00:13:42.950 "superblock": true, 00:13:42.950 "num_base_bdevs": 4, 00:13:42.950 "num_base_bdevs_discovered": 3, 00:13:42.950 "num_base_bdevs_operational": 3, 00:13:42.950 "base_bdevs_list": [ 00:13:42.950 { 00:13:42.950 "name": "spare", 00:13:42.950 "uuid": "f40103be-02a4-55dd-934f-59923f838e19", 00:13:42.950 "is_configured": true, 00:13:42.950 "data_offset": 2048, 00:13:42.950 "data_size": 63488 00:13:42.950 }, 00:13:42.950 { 00:13:42.950 "name": null, 00:13:42.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.950 "is_configured": false, 00:13:42.950 "data_offset": 2048, 
00:13:42.950 "data_size": 63488
00:13:42.950 },
00:13:42.950 {
00:13:42.950 "name": "BaseBdev3",
00:13:42.950 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:42.950 "is_configured": true,
00:13:42.950 "data_offset": 2048,
00:13:42.950 "data_size": 63488
00:13:42.950 },
00:13:42.950 {
00:13:42.950 "name": "BaseBdev4",
00:13:42.950 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:42.950 "is_configured": true,
00:13:42.950 "data_offset": 2048,
00:13:42.950 "data_size": 63488
00:13:42.950 }
00:13:42.950 ]
00:13:42.950 }'
00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:42.950 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:43.520 "name": "raid_bdev1",
00:13:43.520 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:43.520 "strip_size_kb": 0,
00:13:43.520 "state": "online",
00:13:43.520 "raid_level": "raid1",
00:13:43.520 "superblock": true,
00:13:43.520 "num_base_bdevs": 4,
00:13:43.520 "num_base_bdevs_discovered": 3,
00:13:43.520 "num_base_bdevs_operational": 3,
00:13:43.520 "base_bdevs_list": [
00:13:43.520 {
00:13:43.520 "name": "spare",
00:13:43.520 "uuid": "f40103be-02a4-55dd-934f-59923f838e19",
00:13:43.520 "is_configured": true,
00:13:43.520 "data_offset": 2048,
00:13:43.520 "data_size": 63488
00:13:43.520 },
00:13:43.520 {
00:13:43.520 "name": null,
00:13:43.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:43.520 "is_configured": false,
00:13:43.520 "data_offset": 2048,
00:13:43.520 "data_size": 63488
00:13:43.520 },
00:13:43.520 {
00:13:43.520 "name": "BaseBdev3",
00:13:43.520 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:43.520 "is_configured": true,
00:13:43.520 "data_offset": 2048,
00:13:43.520 "data_size": 63488
00:13:43.520 },
00:13:43.520 {
00:13:43.520 "name": "BaseBdev4",
00:13:43.520 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:43.520 "is_configured": true,
00:13:43.520 "data_offset": 2048,
00:13:43.520 "data_size": 63488
00:13:43.520 }
00:13:43.520 ]
00:13:43.520 }'
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:43.520 03:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:43.520 [2024-11-21 03:22:31.027823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.520 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:43.520 "name": "raid_bdev1",
00:13:43.520 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:43.520 "strip_size_kb": 0,
00:13:43.520 "state": "online",
00:13:43.520 "raid_level": "raid1",
00:13:43.520 "superblock": true,
00:13:43.520 "num_base_bdevs": 4,
00:13:43.520 "num_base_bdevs_discovered": 2,
00:13:43.520 "num_base_bdevs_operational": 2,
00:13:43.520 "base_bdevs_list": [
00:13:43.520 {
00:13:43.520 "name": null,
00:13:43.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:43.520 "is_configured": false,
00:13:43.520 "data_offset": 0,
00:13:43.520 "data_size": 63488
00:13:43.520 },
00:13:43.520 {
00:13:43.520 "name": null,
00:13:43.520 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:43.520 "is_configured": false,
00:13:43.520 "data_offset": 2048,
00:13:43.521 "data_size": 63488
00:13:43.521 },
00:13:43.521 {
00:13:43.521 "name": "BaseBdev3",
00:13:43.521 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:43.521 "is_configured": true,
00:13:43.521 "data_offset": 2048,
00:13:43.521 "data_size": 63488
00:13:43.521 },
00:13:43.521 {
00:13:43.521 "name": "BaseBdev4",
00:13:43.521 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:43.521 "is_configured": true,
00:13:43.521 "data_offset": 2048,
00:13:43.521 "data_size": 63488
00:13:43.521 }
00:13:43.521 ]
00:13:43.521 }'
00:13:43.521 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:43.521 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:44.091 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:44.091 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.091 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:44.091 [2024-11-21 03:22:31.447966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:44.091 [2024-11-21 03:22:31.448258] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:13:44.091 [2024-11-21 03:22:31.448339] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:44.091 [2024-11-21 03:22:31.448400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:44.091 [2024-11-21 03:22:31.452518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160
00:13:44.091 03:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.091 03:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:44.091 [2024-11-21 03:22:31.454556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:45.030 "name": "raid_bdev1",
00:13:45.030 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:45.030 "strip_size_kb": 0,
00:13:45.030 "state": "online",
00:13:45.030 "raid_level": "raid1",
00:13:45.030 "superblock": true,
00:13:45.030 "num_base_bdevs": 4,
00:13:45.030 "num_base_bdevs_discovered": 3,
00:13:45.030 "num_base_bdevs_operational": 3,
00:13:45.030 "process": {
00:13:45.030 "type": "rebuild",
00:13:45.030 "target": "spare",
00:13:45.030 "progress": {
00:13:45.030 "blocks": 20480,
00:13:45.030 "percent": 32
00:13:45.030 }
00:13:45.030 },
00:13:45.030 "base_bdevs_list": [
00:13:45.030 {
00:13:45.030 "name": "spare",
00:13:45.030 "uuid": "f40103be-02a4-55dd-934f-59923f838e19",
00:13:45.030 "is_configured": true,
00:13:45.030 "data_offset": 2048,
00:13:45.030 "data_size": 63488
00:13:45.030 },
00:13:45.030 {
00:13:45.030 "name": null,
00:13:45.030 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:45.030 "is_configured": false,
00:13:45.030 "data_offset": 2048,
00:13:45.030 "data_size": 63488
00:13:45.030 },
00:13:45.030 {
00:13:45.030 "name": "BaseBdev3",
00:13:45.030 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:45.030 "is_configured": true,
00:13:45.030 "data_offset": 2048,
00:13:45.030 "data_size": 63488
00:13:45.030 },
00:13:45.030 {
00:13:45.030 "name": "BaseBdev4",
00:13:45.030 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:45.030 "is_configured": true,
00:13:45.030 "data_offset": 2048,
00:13:45.030 "data_size": 63488
00:13:45.030 }
00:13:45.030 ]
00:13:45.030 }'
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:45.030 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:45.290 [2024-11-21 03:22:32.609507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:45.290 [2024-11-21 03:22:32.661144] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:45.290 [2024-11-21 03:22:32.661225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:45.290 [2024-11-21 03:22:32.661245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:45.290 [2024-11-21 03:22:32.661252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:45.290 "name": "raid_bdev1",
00:13:45.290 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:45.290 "strip_size_kb": 0,
00:13:45.290 "state": "online",
00:13:45.290 "raid_level": "raid1",
00:13:45.290 "superblock": true,
00:13:45.290 "num_base_bdevs": 4,
00:13:45.290 "num_base_bdevs_discovered": 2,
00:13:45.290 "num_base_bdevs_operational": 2,
00:13:45.290 "base_bdevs_list": [
00:13:45.290 {
00:13:45.290 "name": null,
00:13:45.290 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:45.290 "is_configured": false,
00:13:45.290 "data_offset": 0,
00:13:45.290 "data_size": 63488
00:13:45.290 },
00:13:45.290 {
00:13:45.290 "name": null,
00:13:45.290 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:45.290 "is_configured": false,
00:13:45.290 "data_offset": 2048,
00:13:45.290 "data_size": 63488
00:13:45.290 },
00:13:45.290 {
00:13:45.290 "name": "BaseBdev3",
00:13:45.290 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:45.290 "is_configured": true,
00:13:45.290 "data_offset": 2048,
00:13:45.290 "data_size": 63488
00:13:45.290 },
00:13:45.290 {
00:13:45.290 "name": "BaseBdev4",
00:13:45.290 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:45.290 "is_configured": true,
00:13:45.290 "data_offset": 2048,
00:13:45.290 "data_size": 63488
00:13:45.290 }
00:13:45.290 ]
00:13:45.290 }'
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:45.290 03:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:45.860 03:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:45.860 03:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.860 03:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:45.860 [2024-11-21 03:22:33.121759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:45.860 [2024-11-21 03:22:33.121893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:45.860 [2024-11-21 03:22:33.121926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:13:45.860 [2024-11-21 03:22:33.121936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:45.860 [2024-11-21 03:22:33.122395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:45.860 [2024-11-21 03:22:33.122424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:45.860 [2024-11-21 03:22:33.122527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:45.860 [2024-11-21 03:22:33.122540] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:13:45.860 [2024-11-21 03:22:33.122553] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:45.860 [2024-11-21 03:22:33.122582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:45.860 [2024-11-21 03:22:33.126709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230
00:13:45.860 spare
00:13:45.860 03:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.860 03:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:45.860 [2024-11-21 03:22:33.128804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.800 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:46.800 "name": "raid_bdev1",
00:13:46.800 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:46.800 "strip_size_kb": 0,
00:13:46.800 "state": "online",
00:13:46.800 "raid_level": "raid1",
00:13:46.800 "superblock": true,
00:13:46.800 "num_base_bdevs": 4,
00:13:46.800 "num_base_bdevs_discovered": 3,
00:13:46.800 "num_base_bdevs_operational": 3,
00:13:46.800 "process": {
00:13:46.800 "type": "rebuild",
00:13:46.800 "target": "spare",
00:13:46.800 "progress": {
00:13:46.800 "blocks": 20480,
00:13:46.800 "percent": 32
00:13:46.800 }
00:13:46.800 },
00:13:46.801 "base_bdevs_list": [
00:13:46.801 {
00:13:46.801 "name": "spare",
00:13:46.801 "uuid": "f40103be-02a4-55dd-934f-59923f838e19",
00:13:46.801 "is_configured": true,
00:13:46.801 "data_offset": 2048,
00:13:46.801 "data_size": 63488
00:13:46.801 },
00:13:46.801 {
00:13:46.801 "name": null,
00:13:46.801 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:46.801 "is_configured": false,
00:13:46.801 "data_offset": 2048,
00:13:46.801 "data_size": 63488
00:13:46.801 },
00:13:46.801 {
00:13:46.801 "name": "BaseBdev3",
00:13:46.801 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:46.801 "is_configured": true,
00:13:46.801 "data_offset": 2048,
00:13:46.801 "data_size": 63488
00:13:46.801 },
00:13:46.801 {
00:13:46.801 "name": "BaseBdev4",
00:13:46.801 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:46.801 "is_configured": true,
00:13:46.801 "data_offset": 2048,
00:13:46.801 "data_size": 63488
00:13:46.801 }
00:13:46.801 ]
00:13:46.801 }'
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:46.801 [2024-11-21 03:22:34.263440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:46.801 [2024-11-21 03:22:34.335374] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:46.801 [2024-11-21 03:22:34.335440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:46.801 [2024-11-21 03:22:34.335456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:46.801 [2024-11-21 03:22:34.335465] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.801 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:47.061 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.061 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:47.061 "name": "raid_bdev1",
00:13:47.061 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:47.061 "strip_size_kb": 0,
00:13:47.061 "state": "online",
00:13:47.061 "raid_level": "raid1",
00:13:47.061 "superblock": true,
00:13:47.061 "num_base_bdevs": 4,
00:13:47.061 "num_base_bdevs_discovered": 2,
00:13:47.061 "num_base_bdevs_operational": 2,
00:13:47.061 "base_bdevs_list": [
00:13:47.061 {
00:13:47.061 "name": null,
00:13:47.061 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.061 "is_configured": false,
00:13:47.061 "data_offset": 0,
00:13:47.061 "data_size": 63488
00:13:47.061 },
00:13:47.061 {
00:13:47.061 "name": null,
00:13:47.061 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.061 "is_configured": false,
00:13:47.061 "data_offset": 2048,
00:13:47.061 "data_size": 63488
00:13:47.061 },
00:13:47.061 {
00:13:47.061 "name": "BaseBdev3",
00:13:47.061 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:47.061 "is_configured": true,
00:13:47.061 "data_offset": 2048,
00:13:47.061 "data_size": 63488
00:13:47.061 },
00:13:47.061 {
00:13:47.061 "name": "BaseBdev4",
00:13:47.061 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:47.061 "is_configured": true,
00:13:47.061 "data_offset": 2048,
00:13:47.061 "data_size": 63488
00:13:47.061 }
00:13:47.061 ]
00:13:47.061 }'
00:13:47.061 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:47.061 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.320 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:47.320 "name": "raid_bdev1",
00:13:47.320 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:47.320 "strip_size_kb": 0,
00:13:47.320 "state": "online",
00:13:47.320 "raid_level": "raid1",
00:13:47.320 "superblock": true,
00:13:47.320 "num_base_bdevs": 4,
00:13:47.320 "num_base_bdevs_discovered": 2,
00:13:47.320 "num_base_bdevs_operational": 2,
00:13:47.321 "base_bdevs_list": [
00:13:47.321 {
00:13:47.321 "name": null,
00:13:47.321 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.321 "is_configured": false,
00:13:47.321 "data_offset": 0,
00:13:47.321 "data_size": 63488
00:13:47.321 },
00:13:47.321 {
00:13:47.321 "name": null,
00:13:47.321 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.321 "is_configured": false,
00:13:47.321 "data_offset": 2048,
00:13:47.321 "data_size": 63488
00:13:47.321 },
00:13:47.321 {
00:13:47.321 "name": "BaseBdev3",
00:13:47.321 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:47.321 "is_configured": true,
00:13:47.321 "data_offset": 2048,
00:13:47.321 "data_size": 63488
00:13:47.321 },
00:13:47.321 {
00:13:47.321 "name": "BaseBdev4",
00:13:47.321 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:47.321 "is_configured": true,
00:13:47.321 "data_offset": 2048,
00:13:47.321 "data_size": 63488
00:13:47.321 }
00:13:47.321 ]
00:13:47.321 }'
00:13:47.321 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:47.580 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:47.580 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:47.580 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:47.580 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:47.580 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:47.581 [2024-11-21 03:22:34.980056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:47.581 [2024-11-21 03:22:34.980114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:47.581 [2024-11-21 03:22:34.980138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:13:47.581 [2024-11-21 03:22:34.980150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:47.581 [2024-11-21 03:22:34.980565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:47.581 [2024-11-21 03:22:34.980595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:47.581 [2024-11-21 03:22:34.980667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:47.581 [2024-11-21 03:22:34.980695] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:13:47.581 [2024-11-21 03:22:34.980704] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:47.581 [2024-11-21 03:22:34.980717] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:47.581 BaseBdev1
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.581 03:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.520 03:22:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:48.520 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.520 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:48.520 "name": "raid_bdev1",
00:13:48.520 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:48.520 "strip_size_kb": 0,
00:13:48.520 "state": "online",
00:13:48.520 "raid_level": "raid1",
00:13:48.520 "superblock": true,
00:13:48.520 "num_base_bdevs": 4,
00:13:48.520 "num_base_bdevs_discovered": 2,
00:13:48.520 "num_base_bdevs_operational": 2,
00:13:48.520 "base_bdevs_list": [
00:13:48.520 {
00:13:48.521 "name": null,
00:13:48.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:48.521 "is_configured": false,
00:13:48.521 "data_offset": 0,
00:13:48.521 "data_size": 63488
00:13:48.521 },
00:13:48.521 {
00:13:48.521 "name": null,
00:13:48.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:48.521 "is_configured": false,
00:13:48.521 "data_offset": 2048,
00:13:48.521 "data_size": 63488
00:13:48.521 },
00:13:48.521 {
00:13:48.521 "name": "BaseBdev3",
00:13:48.521 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:48.521 "is_configured": true,
00:13:48.521 "data_offset": 2048,
00:13:48.521 "data_size": 63488
00:13:48.521 },
00:13:48.521 {
00:13:48.521 "name": "BaseBdev4",
00:13:48.521 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:48.521 "is_configured": true,
00:13:48.521 "data_offset": 2048,
00:13:48.521 "data_size": 63488
00:13:48.521 }
00:13:48.521 ]
00:13:48.521 }'
00:13:48.521 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:48.521 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:49.091 "name": "raid_bdev1",
00:13:49.091 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e",
00:13:49.091 "strip_size_kb": 0,
00:13:49.091 "state": "online",
00:13:49.091 "raid_level": "raid1",
00:13:49.091 "superblock": true,
00:13:49.091 "num_base_bdevs": 4,
00:13:49.091 "num_base_bdevs_discovered": 2,
00:13:49.091 "num_base_bdevs_operational": 2,
00:13:49.091 "base_bdevs_list": [
00:13:49.091 {
00:13:49.091 "name": null,
00:13:49.091 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:49.091 "is_configured": false,
00:13:49.091 "data_offset": 0,
00:13:49.091 "data_size": 63488
00:13:49.091 },
00:13:49.091 {
00:13:49.091 "name": null,
00:13:49.091 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:49.091 "is_configured": false,
00:13:49.091 "data_offset": 2048,
00:13:49.091 "data_size": 63488
00:13:49.091 },
00:13:49.091 {
00:13:49.091 "name": "BaseBdev3",
00:13:49.091 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920",
00:13:49.091 "is_configured": true,
00:13:49.091 "data_offset": 2048,
00:13:49.091 "data_size": 63488
00:13:49.091 },
00:13:49.091 {
00:13:49.091 "name": "BaseBdev4",
00:13:49.091 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2",
00:13:49.091 "is_configured": true,
00:13:49.091 "data_offset": 2048,
00:13:49.091 "data_size": 63488
00:13:49.091 }
00:13:49.091 ]
00:13:49.091 }'
00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r
'.process.type // "none"' 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.091 [2024-11-21 03:22:36.584602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.091 [2024-11-21 03:22:36.584771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:49.091 [2024-11-21 03:22:36.584784] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:13:49.091 request: 00:13:49.091 { 00:13:49.091 "base_bdev": "BaseBdev1", 00:13:49.091 "raid_bdev": "raid_bdev1", 00:13:49.091 "method": "bdev_raid_add_base_bdev", 00:13:49.091 "req_id": 1 00:13:49.091 } 00:13:49.091 Got JSON-RPC error response 00:13:49.091 response: 00:13:49.091 { 00:13:49.091 "code": -22, 00:13:49.091 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:49.091 } 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.091 03:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.473 "name": "raid_bdev1", 00:13:50.473 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:50.473 "strip_size_kb": 0, 00:13:50.473 "state": "online", 00:13:50.473 "raid_level": "raid1", 00:13:50.473 "superblock": true, 00:13:50.473 "num_base_bdevs": 4, 00:13:50.473 "num_base_bdevs_discovered": 2, 00:13:50.473 "num_base_bdevs_operational": 2, 00:13:50.473 "base_bdevs_list": [ 00:13:50.473 { 00:13:50.473 "name": null, 00:13:50.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.473 "is_configured": false, 00:13:50.473 "data_offset": 0, 00:13:50.473 "data_size": 63488 00:13:50.473 }, 00:13:50.473 { 00:13:50.473 "name": null, 00:13:50.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.473 "is_configured": false, 00:13:50.473 "data_offset": 2048, 00:13:50.473 "data_size": 63488 00:13:50.473 }, 00:13:50.473 { 00:13:50.473 "name": "BaseBdev3", 00:13:50.473 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:50.473 "is_configured": true, 00:13:50.473 "data_offset": 2048, 00:13:50.473 "data_size": 63488 00:13:50.473 }, 00:13:50.473 { 00:13:50.473 "name": "BaseBdev4", 00:13:50.473 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:50.473 "is_configured": true, 00:13:50.473 
"data_offset": 2048, 00:13:50.473 "data_size": 63488 00:13:50.473 } 00:13:50.473 ] 00:13:50.473 }' 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.473 03:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.473 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.733 "name": "raid_bdev1", 00:13:50.733 "uuid": "a811aa47-a636-4deb-8316-6a2785af2e3e", 00:13:50.733 "strip_size_kb": 0, 00:13:50.733 "state": "online", 00:13:50.733 "raid_level": "raid1", 00:13:50.733 "superblock": true, 00:13:50.733 "num_base_bdevs": 4, 00:13:50.733 "num_base_bdevs_discovered": 2, 00:13:50.733 "num_base_bdevs_operational": 2, 00:13:50.733 "base_bdevs_list": [ 00:13:50.733 { 00:13:50.733 "name": null, 00:13:50.733 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:50.733 "is_configured": false, 00:13:50.733 "data_offset": 0, 00:13:50.733 "data_size": 63488 00:13:50.733 }, 00:13:50.733 { 00:13:50.733 "name": null, 00:13:50.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.733 "is_configured": false, 00:13:50.733 "data_offset": 2048, 00:13:50.733 "data_size": 63488 00:13:50.733 }, 00:13:50.733 { 00:13:50.733 "name": "BaseBdev3", 00:13:50.733 "uuid": "9f349b06-13c0-53cd-9348-f6e154f25920", 00:13:50.733 "is_configured": true, 00:13:50.733 "data_offset": 2048, 00:13:50.733 "data_size": 63488 00:13:50.733 }, 00:13:50.733 { 00:13:50.733 "name": "BaseBdev4", 00:13:50.733 "uuid": "271091ab-1131-5cd2-804f-06707a3d22b2", 00:13:50.733 "is_configured": true, 00:13:50.733 "data_offset": 2048, 00:13:50.733 "data_size": 63488 00:13:50.733 } 00:13:50.733 ] 00:13:50.733 }' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90659 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90659 ']' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 90659 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90659 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:13:50.733 killing process with pid 90659 00:13:50.733 Received shutdown signal, test time was about 60.000000 seconds 00:13:50.733 00:13:50.733 Latency(us) 00:13:50.733 [2024-11-21T03:22:38.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.733 [2024-11-21T03:22:38.299Z] =================================================================================================================== 00:13:50.733 [2024-11-21T03:22:38.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90659' 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 90659 00:13:50.733 [2024-11-21 03:22:38.176499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.733 [2024-11-21 03:22:38.176622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.733 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 90659 00:13:50.733 [2024-11-21 03:22:38.176690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.733 [2024-11-21 03:22:38.176701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.733 [2024-11-21 03:22:38.228545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.993 ************************************ 00:13:50.993 END TEST raid_rebuild_test_sb 00:13:50.993 ************************************ 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:50.993 00:13:50.993 real 0m23.363s 00:13:50.993 user 0m28.689s 00:13:50.993 sys 0m3.693s 00:13:50.993 03:22:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.993 03:22:38 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:50.993 03:22:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:50.993 03:22:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.993 03:22:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.993 ************************************ 00:13:50.993 START TEST raid_rebuild_test_io 00:13:50.993 ************************************ 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:50.993 03:22:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:50.993 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91401 00:13:50.994 03:22:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91401 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 91401 ']' 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.994 03:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.253 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.254 Zero copy mechanism will not be used. 00:13:51.254 [2024-11-21 03:22:38.627734] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:13:51.254 [2024-11-21 03:22:38.628379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91401 ] 00:13:51.254 [2024-11-21 03:22:38.769588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:51.254 [2024-11-21 03:22:38.792246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.513 [2024-11-21 03:22:38.821901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.513 [2024-11-21 03:22:38.866261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.513 [2024-11-21 03:22:38.866294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 BaseBdev1_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 [2024-11-21 03:22:39.474361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:52.084 [2024-11-21 03:22:39.474433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.084 [2024-11-21 03:22:39.474470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.084 [2024-11-21 
03:22:39.474484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.084 [2024-11-21 03:22:39.476645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.084 [2024-11-21 03:22:39.476698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.084 BaseBdev1 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 BaseBdev2_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 [2024-11-21 03:22:39.503490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:52.084 [2024-11-21 03:22:39.503556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.084 [2024-11-21 03:22:39.503575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.084 [2024-11-21 03:22:39.503586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.084 [2024-11-21 03:22:39.505829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:52.084 [2024-11-21 03:22:39.505944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.084 BaseBdev2 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 BaseBdev3_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 [2024-11-21 03:22:39.532601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:52.084 [2024-11-21 03:22:39.532744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.084 [2024-11-21 03:22:39.532770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.084 [2024-11-21 03:22:39.532784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.084 [2024-11-21 03:22:39.534884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.084 [2024-11-21 03:22:39.534953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.084 BaseBdev3 00:13:52.084 03:22:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 BaseBdev4_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 [2024-11-21 03:22:39.575849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:52.084 [2024-11-21 03:22:39.575993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.084 [2024-11-21 03:22:39.576062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.084 [2024-11-21 03:22:39.576112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.084 [2024-11-21 03:22:39.578667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.084 [2024-11-21 03:22:39.578763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.084 BaseBdev4 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.084 spare_malloc 00:13:52.084 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.085 spare_delay 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.085 [2024-11-21 03:22:39.616604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.085 [2024-11-21 03:22:39.616676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.085 [2024-11-21 03:22:39.616700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:52.085 [2024-11-21 03:22:39.616713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.085 [2024-11-21 03:22:39.618787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.085 [2024-11-21 03:22:39.618829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.085 spare 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.085 [2024-11-21 03:22:39.628682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.085 [2024-11-21 03:22:39.630513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.085 [2024-11-21 03:22:39.630650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.085 [2024-11-21 03:22:39.630704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.085 [2024-11-21 03:22:39.630780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:52.085 [2024-11-21 03:22:39.630794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:52.085 [2024-11-21 03:22:39.631080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:52.085 [2024-11-21 03:22:39.631274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:52.085 [2024-11-21 03:22:39.631286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:52.085 [2024-11-21 03:22:39.631429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.085 03:22:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.085 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.344 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.344 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.344 "name": "raid_bdev1", 00:13:52.344 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:52.344 "strip_size_kb": 0, 00:13:52.344 "state": "online", 00:13:52.344 "raid_level": "raid1", 00:13:52.344 "superblock": false, 00:13:52.344 "num_base_bdevs": 4, 00:13:52.344 "num_base_bdevs_discovered": 4, 00:13:52.344 "num_base_bdevs_operational": 4, 00:13:52.344 "base_bdevs_list": [ 00:13:52.344 
{ 00:13:52.344 "name": "BaseBdev1", 00:13:52.344 "uuid": "0fe34ff0-d280-5e32-a321-ad0abfcd0622", 00:13:52.344 "is_configured": true, 00:13:52.344 "data_offset": 0, 00:13:52.344 "data_size": 65536 00:13:52.344 }, 00:13:52.344 { 00:13:52.344 "name": "BaseBdev2", 00:13:52.344 "uuid": "22ec6057-52d1-5d50-8c4a-8a2891b4356c", 00:13:52.344 "is_configured": true, 00:13:52.344 "data_offset": 0, 00:13:52.344 "data_size": 65536 00:13:52.344 }, 00:13:52.344 { 00:13:52.344 "name": "BaseBdev3", 00:13:52.344 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:52.344 "is_configured": true, 00:13:52.344 "data_offset": 0, 00:13:52.344 "data_size": 65536 00:13:52.344 }, 00:13:52.344 { 00:13:52.344 "name": "BaseBdev4", 00:13:52.344 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:52.344 "is_configured": true, 00:13:52.344 "data_offset": 0, 00:13:52.344 "data_size": 65536 00:13:52.344 } 00:13:52.344 ] 00:13:52.344 }' 00:13:52.344 03:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.344 03:22:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 [2024-11-21 03:22:40.053106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.604 
03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 [2024-11-21 03:22:40.144794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.865 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.865 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.865 "name": "raid_bdev1", 00:13:52.865 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:52.865 "strip_size_kb": 0, 00:13:52.865 "state": "online", 00:13:52.865 "raid_level": "raid1", 00:13:52.865 "superblock": false, 00:13:52.865 "num_base_bdevs": 4, 00:13:52.865 "num_base_bdevs_discovered": 3, 00:13:52.865 "num_base_bdevs_operational": 3, 00:13:52.865 "base_bdevs_list": [ 00:13:52.865 { 00:13:52.865 "name": null, 00:13:52.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.865 "is_configured": false, 00:13:52.865 "data_offset": 0, 00:13:52.865 "data_size": 65536 00:13:52.865 }, 00:13:52.865 { 00:13:52.865 "name": "BaseBdev2", 00:13:52.865 "uuid": "22ec6057-52d1-5d50-8c4a-8a2891b4356c", 00:13:52.865 "is_configured": true, 00:13:52.865 "data_offset": 0, 00:13:52.865 "data_size": 65536 00:13:52.865 }, 00:13:52.865 { 00:13:52.865 "name": "BaseBdev3", 00:13:52.865 "uuid": 
"8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:52.865 "is_configured": true, 00:13:52.865 "data_offset": 0, 00:13:52.865 "data_size": 65536 00:13:52.865 }, 00:13:52.865 { 00:13:52.865 "name": "BaseBdev4", 00:13:52.865 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:52.865 "is_configured": true, 00:13:52.865 "data_offset": 0, 00:13:52.865 "data_size": 65536 00:13:52.865 } 00:13:52.865 ] 00:13:52.865 }' 00:13:52.865 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.865 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.865 [2024-11-21 03:22:40.230807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:52.865 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.865 Zero copy mechanism will not be used. 00:13:52.865 Running I/O for 60 seconds... 00:13:53.125 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.125 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.125 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.125 [2024-11-21 03:22:40.583978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.125 03:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.125 03:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:53.125 [2024-11-21 03:22:40.650738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:53.125 [2024-11-21 03:22:40.653019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.384 [2024-11-21 03:22:40.762995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.384 
[2024-11-21 03:22:40.763532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.384 [2024-11-21 03:22:40.865607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.384 [2024-11-21 03:22:40.865940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.952 186.00 IOPS, 558.00 MiB/s [2024-11-21T03:22:41.518Z] [2024-11-21 03:22:41.248691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:54.211 [2024-11-21 03:22:41.577252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.211 03:22:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.211 "name": "raid_bdev1", 00:13:54.211 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:54.211 "strip_size_kb": 0, 00:13:54.211 "state": "online", 00:13:54.211 "raid_level": "raid1", 00:13:54.211 "superblock": false, 00:13:54.211 "num_base_bdevs": 4, 00:13:54.211 "num_base_bdevs_discovered": 4, 00:13:54.211 "num_base_bdevs_operational": 4, 00:13:54.211 "process": { 00:13:54.211 "type": "rebuild", 00:13:54.211 "target": "spare", 00:13:54.211 "progress": { 00:13:54.211 "blocks": 14336, 00:13:54.211 "percent": 21 00:13:54.211 } 00:13:54.211 }, 00:13:54.211 "base_bdevs_list": [ 00:13:54.211 { 00:13:54.211 "name": "spare", 00:13:54.211 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:13:54.211 "is_configured": true, 00:13:54.211 "data_offset": 0, 00:13:54.211 "data_size": 65536 00:13:54.211 }, 00:13:54.211 { 00:13:54.211 "name": "BaseBdev2", 00:13:54.211 "uuid": "22ec6057-52d1-5d50-8c4a-8a2891b4356c", 00:13:54.211 "is_configured": true, 00:13:54.211 "data_offset": 0, 00:13:54.211 "data_size": 65536 00:13:54.211 }, 00:13:54.211 { 00:13:54.211 "name": "BaseBdev3", 00:13:54.211 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:54.211 "is_configured": true, 00:13:54.211 "data_offset": 0, 00:13:54.211 "data_size": 65536 00:13:54.211 }, 00:13:54.211 { 00:13:54.211 "name": "BaseBdev4", 00:13:54.211 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:54.211 "is_configured": true, 00:13:54.211 "data_offset": 0, 00:13:54.211 "data_size": 65536 00:13:54.211 } 00:13:54.211 ] 00:13:54.211 }' 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.211 [2024-11-21 03:22:41.696163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.211 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.211 [2024-11-21 03:22:41.764295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.471 [2024-11-21 03:22:41.814598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.471 [2024-11-21 03:22:41.829795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.471 [2024-11-21 03:22:41.829895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.471 [2024-11-21 03:22:41.829910] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.471 [2024-11-21 03:22:41.848251] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.471 "name": "raid_bdev1", 00:13:54.471 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:54.471 "strip_size_kb": 0, 00:13:54.471 "state": "online", 00:13:54.471 "raid_level": "raid1", 00:13:54.471 "superblock": false, 00:13:54.471 "num_base_bdevs": 4, 00:13:54.471 "num_base_bdevs_discovered": 3, 00:13:54.471 "num_base_bdevs_operational": 3, 00:13:54.471 "base_bdevs_list": [ 00:13:54.471 { 00:13:54.471 "name": null, 00:13:54.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.471 "is_configured": false, 00:13:54.471 "data_offset": 0, 00:13:54.471 "data_size": 65536 00:13:54.471 }, 00:13:54.471 { 00:13:54.471 "name": "BaseBdev2", 00:13:54.471 "uuid": "22ec6057-52d1-5d50-8c4a-8a2891b4356c", 00:13:54.471 "is_configured": true, 00:13:54.471 "data_offset": 0, 00:13:54.471 "data_size": 
65536 00:13:54.471 }, 00:13:54.471 { 00:13:54.471 "name": "BaseBdev3", 00:13:54.471 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:54.471 "is_configured": true, 00:13:54.471 "data_offset": 0, 00:13:54.471 "data_size": 65536 00:13:54.471 }, 00:13:54.471 { 00:13:54.471 "name": "BaseBdev4", 00:13:54.471 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:54.471 "is_configured": true, 00:13:54.471 "data_offset": 0, 00:13:54.471 "data_size": 65536 00:13:54.471 } 00:13:54.471 ] 00:13:54.471 }' 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.471 03:22:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.998 161.00 IOPS, 483.00 MiB/s [2024-11-21T03:22:42.564Z] 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.998 "name": "raid_bdev1", 
00:13:54.998 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:54.998 "strip_size_kb": 0, 00:13:54.998 "state": "online", 00:13:54.998 "raid_level": "raid1", 00:13:54.998 "superblock": false, 00:13:54.998 "num_base_bdevs": 4, 00:13:54.998 "num_base_bdevs_discovered": 3, 00:13:54.998 "num_base_bdevs_operational": 3, 00:13:54.998 "base_bdevs_list": [ 00:13:54.998 { 00:13:54.998 "name": null, 00:13:54.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.998 "is_configured": false, 00:13:54.998 "data_offset": 0, 00:13:54.998 "data_size": 65536 00:13:54.998 }, 00:13:54.998 { 00:13:54.998 "name": "BaseBdev2", 00:13:54.998 "uuid": "22ec6057-52d1-5d50-8c4a-8a2891b4356c", 00:13:54.998 "is_configured": true, 00:13:54.998 "data_offset": 0, 00:13:54.998 "data_size": 65536 00:13:54.998 }, 00:13:54.998 { 00:13:54.998 "name": "BaseBdev3", 00:13:54.998 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:54.998 "is_configured": true, 00:13:54.998 "data_offset": 0, 00:13:54.998 "data_size": 65536 00:13:54.998 }, 00:13:54.998 { 00:13:54.998 "name": "BaseBdev4", 00:13:54.998 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:54.998 "is_configured": true, 00:13:54.998 "data_offset": 0, 00:13:54.998 "data_size": 65536 00:13:54.998 } 00:13:54.998 ] 00:13:54.998 }' 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.998 03:22:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.998 [2024-11-21 03:22:42.432073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.998 03:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:54.998 [2024-11-21 03:22:42.501487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:13:54.998 [2024-11-21 03:22:42.503527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.275 [2024-11-21 03:22:42.639468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.276 [2024-11-21 03:22:42.640762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.535 [2024-11-21 03:22:42.872562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.794 [2024-11-21 03:22:43.228806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:55.794 [2024-11-21 03:22:43.230088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:56.054 162.00 IOPS, 486.00 MiB/s [2024-11-21T03:22:43.620Z] [2024-11-21 03:22:43.447332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.054 [2024-11-21 03:22:43.448003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.054 03:22:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.054 "name": "raid_bdev1", 00:13:56.054 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:56.054 "strip_size_kb": 0, 00:13:56.054 "state": "online", 00:13:56.054 "raid_level": "raid1", 00:13:56.054 "superblock": false, 00:13:56.054 "num_base_bdevs": 4, 00:13:56.054 "num_base_bdevs_discovered": 4, 00:13:56.054 "num_base_bdevs_operational": 4, 00:13:56.054 "process": { 00:13:56.054 "type": "rebuild", 00:13:56.054 "target": "spare", 00:13:56.054 "progress": { 00:13:56.054 "blocks": 10240, 00:13:56.054 "percent": 15 00:13:56.054 } 00:13:56.054 }, 00:13:56.054 "base_bdevs_list": [ 00:13:56.054 { 00:13:56.054 "name": "spare", 00:13:56.054 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:13:56.054 "is_configured": true, 00:13:56.054 "data_offset": 0, 00:13:56.054 "data_size": 65536 00:13:56.054 }, 00:13:56.054 { 00:13:56.054 "name": "BaseBdev2", 00:13:56.054 "uuid": "22ec6057-52d1-5d50-8c4a-8a2891b4356c", 00:13:56.054 
"is_configured": true, 00:13:56.054 "data_offset": 0, 00:13:56.054 "data_size": 65536 00:13:56.054 }, 00:13:56.054 { 00:13:56.054 "name": "BaseBdev3", 00:13:56.054 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:56.054 "is_configured": true, 00:13:56.054 "data_offset": 0, 00:13:56.054 "data_size": 65536 00:13:56.054 }, 00:13:56.054 { 00:13:56.054 "name": "BaseBdev4", 00:13:56.054 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:56.054 "is_configured": true, 00:13:56.054 "data_offset": 0, 00:13:56.054 "data_size": 65536 00:13:56.054 } 00:13:56.054 ] 00:13:56.054 }' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.054 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.313 [2024-11-21 03:22:43.624044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.313 [2024-11-21 03:22:43.768571] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: 
*DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:13:56.313 [2024-11-21 03:22:43.768618] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.313 "name": "raid_bdev1", 00:13:56.313 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:56.313 "strip_size_kb": 0, 00:13:56.313 "state": "online", 00:13:56.313 "raid_level": "raid1", 00:13:56.313 "superblock": false, 00:13:56.313 "num_base_bdevs": 4, 00:13:56.313 
"num_base_bdevs_discovered": 3, 00:13:56.313 "num_base_bdevs_operational": 3, 00:13:56.313 "process": { 00:13:56.313 "type": "rebuild", 00:13:56.313 "target": "spare", 00:13:56.313 "progress": { 00:13:56.313 "blocks": 12288, 00:13:56.313 "percent": 18 00:13:56.313 } 00:13:56.313 }, 00:13:56.313 "base_bdevs_list": [ 00:13:56.313 { 00:13:56.313 "name": "spare", 00:13:56.313 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:13:56.313 "is_configured": true, 00:13:56.313 "data_offset": 0, 00:13:56.313 "data_size": 65536 00:13:56.313 }, 00:13:56.313 { 00:13:56.313 "name": null, 00:13:56.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.313 "is_configured": false, 00:13:56.313 "data_offset": 0, 00:13:56.313 "data_size": 65536 00:13:56.313 }, 00:13:56.313 { 00:13:56.313 "name": "BaseBdev3", 00:13:56.313 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:56.313 "is_configured": true, 00:13:56.313 "data_offset": 0, 00:13:56.313 "data_size": 65536 00:13:56.313 }, 00:13:56.313 { 00:13:56.313 "name": "BaseBdev4", 00:13:56.313 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:56.313 "is_configured": true, 00:13:56.313 "data_offset": 0, 00:13:56.313 "data_size": 65536 00:13:56.313 } 00:13:56.313 ] 00:13:56.313 }' 00:13:56.313 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.572 [2024-11-21 03:22:43.888399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:13:56.572 03:22:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.572 "name": "raid_bdev1", 00:13:56.572 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:56.572 "strip_size_kb": 0, 00:13:56.572 "state": "online", 00:13:56.572 "raid_level": "raid1", 00:13:56.572 "superblock": false, 00:13:56.572 "num_base_bdevs": 4, 00:13:56.572 "num_base_bdevs_discovered": 3, 00:13:56.572 "num_base_bdevs_operational": 3, 00:13:56.572 "process": { 00:13:56.572 "type": "rebuild", 00:13:56.572 "target": "spare", 00:13:56.572 "progress": { 00:13:56.572 "blocks": 14336, 00:13:56.572 "percent": 21 00:13:56.572 } 00:13:56.572 }, 00:13:56.572 "base_bdevs_list": [ 00:13:56.572 { 00:13:56.572 "name": "spare", 00:13:56.572 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 
00:13:56.572 "is_configured": true, 00:13:56.572 "data_offset": 0, 00:13:56.572 "data_size": 65536 00:13:56.572 }, 00:13:56.572 { 00:13:56.572 "name": null, 00:13:56.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.572 "is_configured": false, 00:13:56.572 "data_offset": 0, 00:13:56.572 "data_size": 65536 00:13:56.572 }, 00:13:56.572 { 00:13:56.572 "name": "BaseBdev3", 00:13:56.572 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:56.572 "is_configured": true, 00:13:56.572 "data_offset": 0, 00:13:56.572 "data_size": 65536 00:13:56.572 }, 00:13:56.572 { 00:13:56.572 "name": "BaseBdev4", 00:13:56.572 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:56.572 "is_configured": true, 00:13:56.572 "data_offset": 0, 00:13:56.572 "data_size": 65536 00:13:56.572 } 00:13:56.572 ] 00:13:56.572 }' 00:13:56.572 03:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.572 [2024-11-21 03:22:43.997685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:56.572 03:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.572 03:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.572 03:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.572 03:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.832 [2024-11-21 03:22:44.243601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:57.401 144.75 IOPS, 434.25 MiB/s [2024-11-21T03:22:44.968Z] [2024-11-21 03:22:44.728106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:57.661 [2024-11-21 03:22:45.060574] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:57.661 [2024-11-21 03:22:45.061116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.661 "name": "raid_bdev1", 00:13:57.661 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:57.661 "strip_size_kb": 0, 00:13:57.661 "state": "online", 00:13:57.661 "raid_level": "raid1", 00:13:57.661 "superblock": false, 00:13:57.661 "num_base_bdevs": 4, 00:13:57.661 "num_base_bdevs_discovered": 3, 00:13:57.661 "num_base_bdevs_operational": 3, 00:13:57.661 "process": { 00:13:57.661 "type": "rebuild", 00:13:57.661 "target": 
"spare", 00:13:57.661 "progress": { 00:13:57.661 "blocks": 32768, 00:13:57.661 "percent": 50 00:13:57.661 } 00:13:57.661 }, 00:13:57.661 "base_bdevs_list": [ 00:13:57.661 { 00:13:57.661 "name": "spare", 00:13:57.661 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:13:57.661 "is_configured": true, 00:13:57.661 "data_offset": 0, 00:13:57.661 "data_size": 65536 00:13:57.661 }, 00:13:57.661 { 00:13:57.661 "name": null, 00:13:57.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.661 "is_configured": false, 00:13:57.661 "data_offset": 0, 00:13:57.661 "data_size": 65536 00:13:57.661 }, 00:13:57.661 { 00:13:57.661 "name": "BaseBdev3", 00:13:57.661 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:57.661 "is_configured": true, 00:13:57.661 "data_offset": 0, 00:13:57.661 "data_size": 65536 00:13:57.661 }, 00:13:57.661 { 00:13:57.661 "name": "BaseBdev4", 00:13:57.661 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:57.661 "is_configured": true, 00:13:57.661 "data_offset": 0, 00:13:57.661 "data_size": 65536 00:13:57.661 } 00:13:57.661 ] 00:13:57.661 }' 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.661 [2024-11-21 03:22:45.193707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.661 03:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.490 129.80 IOPS, 389.40 MiB/s [2024-11-21T03:22:46.056Z] [2024-11-21 03:22:45.893357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 
49152 00:13:58.490 [2024-11-21 03:22:45.893642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.750 116.83 IOPS, 350.50 MiB/s [2024-11-21T03:22:46.316Z] 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.750 "name": "raid_bdev1", 00:13:58.750 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:13:58.750 "strip_size_kb": 0, 00:13:58.750 "state": "online", 00:13:58.750 "raid_level": "raid1", 00:13:58.750 "superblock": false, 00:13:58.750 "num_base_bdevs": 4, 00:13:58.750 "num_base_bdevs_discovered": 3, 00:13:58.750 "num_base_bdevs_operational": 3, 00:13:58.750 "process": { 00:13:58.750 "type": "rebuild", 00:13:58.750 "target": "spare", 
00:13:58.750 "progress": { 00:13:58.750 "blocks": 53248, 00:13:58.750 "percent": 81 00:13:58.750 } 00:13:58.750 }, 00:13:58.750 "base_bdevs_list": [ 00:13:58.750 { 00:13:58.750 "name": "spare", 00:13:58.750 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:13:58.750 "is_configured": true, 00:13:58.750 "data_offset": 0, 00:13:58.750 "data_size": 65536 00:13:58.750 }, 00:13:58.750 { 00:13:58.750 "name": null, 00:13:58.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.750 "is_configured": false, 00:13:58.750 "data_offset": 0, 00:13:58.750 "data_size": 65536 00:13:58.750 }, 00:13:58.750 { 00:13:58.750 "name": "BaseBdev3", 00:13:58.750 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:13:58.750 "is_configured": true, 00:13:58.750 "data_offset": 0, 00:13:58.750 "data_size": 65536 00:13:58.750 }, 00:13:58.750 { 00:13:58.750 "name": "BaseBdev4", 00:13:58.750 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:13:58.750 "is_configured": true, 00:13:58.750 "data_offset": 0, 00:13:58.750 "data_size": 65536 00:13:58.750 } 00:13:58.750 ] 00:13:58.750 }' 00:13:58.750 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.010 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.010 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.010 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.010 03:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.010 [2024-11-21 03:22:46.418132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:59.270 [2024-11-21 03:22:46.634372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:59.529 [2024-11-21 03:22:47.074344] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.789 [2024-11-21 03:22:47.174371] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.789 [2024-11-21 03:22:47.176179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.048 104.43 IOPS, 313.29 MiB/s [2024-11-21T03:22:47.614Z] 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.048 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.048 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.048 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.048 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.048 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.049 "name": "raid_bdev1", 00:14:00.049 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:14:00.049 "strip_size_kb": 0, 00:14:00.049 "state": "online", 00:14:00.049 "raid_level": "raid1", 00:14:00.049 "superblock": false, 00:14:00.049 "num_base_bdevs": 4, 00:14:00.049 
"num_base_bdevs_discovered": 3, 00:14:00.049 "num_base_bdevs_operational": 3, 00:14:00.049 "base_bdevs_list": [ 00:14:00.049 { 00:14:00.049 "name": "spare", 00:14:00.049 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:14:00.049 "is_configured": true, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 }, 00:14:00.049 { 00:14:00.049 "name": null, 00:14:00.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.049 "is_configured": false, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 }, 00:14:00.049 { 00:14:00.049 "name": "BaseBdev3", 00:14:00.049 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:14:00.049 "is_configured": true, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 }, 00:14:00.049 { 00:14:00.049 "name": "BaseBdev4", 00:14:00.049 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:14:00.049 "is_configured": true, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 } 00:14:00.049 ] 00:14:00.049 }' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.049 "name": "raid_bdev1", 00:14:00.049 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:14:00.049 "strip_size_kb": 0, 00:14:00.049 "state": "online", 00:14:00.049 "raid_level": "raid1", 00:14:00.049 "superblock": false, 00:14:00.049 "num_base_bdevs": 4, 00:14:00.049 "num_base_bdevs_discovered": 3, 00:14:00.049 "num_base_bdevs_operational": 3, 00:14:00.049 "base_bdevs_list": [ 00:14:00.049 { 00:14:00.049 "name": "spare", 00:14:00.049 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:14:00.049 "is_configured": true, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 }, 00:14:00.049 { 00:14:00.049 "name": null, 00:14:00.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.049 "is_configured": false, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 }, 00:14:00.049 { 00:14:00.049 "name": "BaseBdev3", 00:14:00.049 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:14:00.049 "is_configured": true, 00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 }, 00:14:00.049 { 00:14:00.049 "name": "BaseBdev4", 00:14:00.049 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:14:00.049 "is_configured": true, 
00:14:00.049 "data_offset": 0, 00:14:00.049 "data_size": 65536 00:14:00.049 } 00:14:00.049 ] 00:14:00.049 }' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.049 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.309 "name": "raid_bdev1", 00:14:00.309 "uuid": "82315d0b-d2f1-4b49-9e2b-69f413d5cc30", 00:14:00.309 "strip_size_kb": 0, 00:14:00.309 "state": "online", 00:14:00.309 "raid_level": "raid1", 00:14:00.309 "superblock": false, 00:14:00.309 "num_base_bdevs": 4, 00:14:00.309 "num_base_bdevs_discovered": 3, 00:14:00.309 "num_base_bdevs_operational": 3, 00:14:00.309 "base_bdevs_list": [ 00:14:00.309 { 00:14:00.309 "name": "spare", 00:14:00.309 "uuid": "e4eead7f-9bed-5372-ab74-e766b4bf972f", 00:14:00.309 "is_configured": true, 00:14:00.309 "data_offset": 0, 00:14:00.309 "data_size": 65536 00:14:00.309 }, 00:14:00.309 { 00:14:00.309 "name": null, 00:14:00.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.309 "is_configured": false, 00:14:00.309 "data_offset": 0, 00:14:00.309 "data_size": 65536 00:14:00.309 }, 00:14:00.309 { 00:14:00.309 "name": "BaseBdev3", 00:14:00.309 "uuid": "8b49aec2-fddc-573b-80b6-7fe625e0954a", 00:14:00.309 "is_configured": true, 00:14:00.309 "data_offset": 0, 00:14:00.309 "data_size": 65536 00:14:00.309 }, 00:14:00.309 { 00:14:00.309 "name": "BaseBdev4", 00:14:00.309 "uuid": "5b2ea7f0-93c1-5828-94d6-7413b6ff46fb", 00:14:00.309 "is_configured": true, 00:14:00.309 "data_offset": 0, 00:14:00.309 "data_size": 65536 00:14:00.309 } 00:14:00.309 ] 00:14:00.309 }' 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.309 03:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.569 03:22:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.569 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 [2024-11-21 03:22:48.054428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.569 [2024-11-21 03:22:48.054469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.828 00:14:00.828 Latency(us) 00:14:00.828 [2024-11-21T03:22:48.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.829 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:00.829 raid_bdev1 : 7.92 95.83 287.48 0.00 0.00 14161.38 289.18 116528.75 00:14:00.829 [2024-11-21T03:22:48.395Z] =================================================================================================================== 00:14:00.829 [2024-11-21T03:22:48.395Z] Total : 95.83 287.48 0.00 0.00 14161.38 289.18 116528.75 00:14:00.829 [2024-11-21 03:22:48.158209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.829 [2024-11-21 03:22:48.158264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.829 [2024-11-21 03:22:48.158390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.829 [2024-11-21 03:22:48.158403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:00.829 { 00:14:00.829 "results": [ 00:14:00.829 { 00:14:00.829 "job": "raid_bdev1", 00:14:00.829 "core_mask": "0x1", 00:14:00.829 "workload": "randrw", 00:14:00.829 "percentage": 50, 00:14:00.829 "status": "finished", 00:14:00.829 "queue_depth": 2, 00:14:00.829 "io_size": 3145728, 00:14:00.829 "runtime": 7.920685, 00:14:00.829 "iops": 95.82504543483297, 00:14:00.829 "mibps": 287.4751363044989, 00:14:00.829 "io_failed": 0, 00:14:00.829 "io_timeout": 0, 00:14:00.829 
"avg_latency_us": 14161.382975461549, 00:14:00.829 "min_latency_us": 289.1798134751155, 00:14:00.829 "max_latency_us": 116528.7544670095 00:14:00.829 } 00:14:00.829 ], 00:14:00.829 "core_count": 1 00:14:00.829 } 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.829 03:22:48 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.829 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:01.088 /dev/nbd0 00:14:01.088 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.088 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.088 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.088 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.089 1+0 records in 00:14:01.089 1+0 records out 00:14:01.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372064 s, 11.0 MB/s 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.089 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:01.349 /dev/nbd1 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.349 1+0 records in 00:14:01.349 1+0 records out 00:14:01.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234376 s, 17.5 MB/s 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.349 03:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.608 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:01.868 /dev/nbd1 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.868 1+0 records in 00:14:01.868 1+0 records out 00:14:01.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026273 s, 15.6 MB/s 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.868 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:02.129 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.129 03:22:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 91401 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 91401 ']' 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 91401 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91401 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.389 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.390 killing process with pid 91401 00:14:02.390 03:22:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91401' 00:14:02.390 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 91401 00:14:02.390 Received shutdown signal, test time was about 9.604748 seconds 00:14:02.390 00:14:02.390 Latency(us) 00:14:02.390 [2024-11-21T03:22:49.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.390 [2024-11-21T03:22:49.956Z] =================================================================================================================== 00:14:02.390 [2024-11-21T03:22:49.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.390 [2024-11-21 03:22:49.838524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.390 03:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 91401 00:14:02.390 [2024-11-21 03:22:49.885383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:02.649 00:14:02.649 real 0m11.588s 00:14:02.649 user 0m15.010s 00:14:02.649 sys 0m1.801s 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.649 ************************************ 00:14:02.649 END TEST raid_rebuild_test_io 00:14:02.649 ************************************ 00:14:02.649 03:22:50 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:02.649 03:22:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:02.649 03:22:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.649 03:22:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.649 ************************************ 00:14:02.649 START TEST 
raid_rebuild_test_sb_io 00:14:02.649 ************************************ 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 
00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:02.649 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91794 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91794 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 91794 ']' 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.650 03:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.909 [2024-11-21 03:22:50.277920] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:02.909 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:02.910 Zero copy mechanism will not be used. 00:14:02.910 [2024-11-21 03:22:50.278090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91794 ] 00:14:02.910 [2024-11-21 03:22:50.413477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:02.910 [2024-11-21 03:22:50.453413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.170 [2024-11-21 03:22:50.483139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.170 [2024-11-21 03:22:50.526374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.170 [2024-11-21 03:22:50.526419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 BaseBdev1_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 [2024-11-21 03:22:51.150672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:03.739 [2024-11-21 03:22:51.150758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.739 [2024-11-21 03:22:51.150790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:14:03.739 [2024-11-21 03:22:51.150806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.739 [2024-11-21 03:22:51.153009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.739 [2024-11-21 03:22:51.153062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:03.739 BaseBdev1 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 BaseBdev2_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 [2024-11-21 03:22:51.179507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:03.739 [2024-11-21 03:22:51.179562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.739 [2024-11-21 03:22:51.179581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:03.739 [2024-11-21 03:22:51.179591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.739 [2024-11-21 03:22:51.181642] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.739 [2024-11-21 03:22:51.181679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:03.739 BaseBdev2 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 BaseBdev3_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 [2024-11-21 03:22:51.208493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:03.739 [2024-11-21 03:22:51.208546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.739 [2024-11-21 03:22:51.208566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:03.739 [2024-11-21 03:22:51.208576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.739 [2024-11-21 03:22:51.210779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.739 [2024-11-21 03:22:51.210820] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:14:03.739 BaseBdev3 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 BaseBdev4_malloc 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 [2024-11-21 03:22:51.247844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:03.739 [2024-11-21 03:22:51.247909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.739 [2024-11-21 03:22:51.247937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:03.739 [2024-11-21 03:22:51.247950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.739 [2024-11-21 03:22:51.250402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.739 [2024-11-21 03:22:51.250435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:03.739 BaseBdev4 00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:03.739 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 spare_malloc 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 spare_delay 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 [2024-11-21 03:22:51.288676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:03.740 [2024-11-21 03:22:51.288736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.740 [2024-11-21 03:22:51.288761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:03.740 [2024-11-21 03:22:51.288774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.740 [2024-11-21 03:22:51.290870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.740 [2024-11-21 03:22:51.290911] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:03.740 spare 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.740 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 [2024-11-21 03:22:51.300771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.000 [2024-11-21 03:22:51.302763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.000 [2024-11-21 03:22:51.302834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.000 [2024-11-21 03:22:51.302881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.000 [2024-11-21 03:22:51.303122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:04.000 [2024-11-21 03:22:51.303142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.000 [2024-11-21 03:22:51.303407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:04.000 [2024-11-21 03:22:51.303587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:04.000 [2024-11-21 03:22:51.303607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:04.000 [2024-11-21 03:22:51.303743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.000 "name": "raid_bdev1", 00:14:04.000 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:04.000 "strip_size_kb": 0, 00:14:04.000 "state": "online", 00:14:04.000 "raid_level": "raid1", 
00:14:04.000 "superblock": true, 00:14:04.000 "num_base_bdevs": 4, 00:14:04.000 "num_base_bdevs_discovered": 4, 00:14:04.000 "num_base_bdevs_operational": 4, 00:14:04.000 "base_bdevs_list": [ 00:14:04.000 { 00:14:04.000 "name": "BaseBdev1", 00:14:04.000 "uuid": "167af8b1-a38f-5ced-b9cc-96841cb81319", 00:14:04.000 "is_configured": true, 00:14:04.000 "data_offset": 2048, 00:14:04.000 "data_size": 63488 00:14:04.000 }, 00:14:04.000 { 00:14:04.000 "name": "BaseBdev2", 00:14:04.000 "uuid": "084bf0b3-eb89-527d-ae6e-2726a5d6a267", 00:14:04.000 "is_configured": true, 00:14:04.000 "data_offset": 2048, 00:14:04.000 "data_size": 63488 00:14:04.000 }, 00:14:04.000 { 00:14:04.000 "name": "BaseBdev3", 00:14:04.000 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:04.000 "is_configured": true, 00:14:04.000 "data_offset": 2048, 00:14:04.000 "data_size": 63488 00:14:04.000 }, 00:14:04.000 { 00:14:04.000 "name": "BaseBdev4", 00:14:04.000 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:04.000 "is_configured": true, 00:14:04.000 "data_offset": 2048, 00:14:04.000 "data_size": 63488 00:14:04.000 } 00:14:04.000 ] 00:14:04.000 }' 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.000 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.260 [2024-11-21 03:22:51.777197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:04.260 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.519 [2024-11-21 03:22:51.860882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.519 03:22:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.519 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.519 "name": "raid_bdev1", 00:14:04.519 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:04.519 "strip_size_kb": 0, 00:14:04.519 "state": "online", 00:14:04.519 "raid_level": "raid1", 00:14:04.519 "superblock": true, 00:14:04.519 "num_base_bdevs": 4, 00:14:04.519 "num_base_bdevs_discovered": 3, 00:14:04.519 "num_base_bdevs_operational": 3, 00:14:04.519 "base_bdevs_list": [ 00:14:04.519 { 00:14:04.519 "name": null, 00:14:04.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.519 "is_configured": false, 00:14:04.519 "data_offset": 0, 00:14:04.519 "data_size": 
63488 00:14:04.519 }, 00:14:04.519 { 00:14:04.519 "name": "BaseBdev2", 00:14:04.519 "uuid": "084bf0b3-eb89-527d-ae6e-2726a5d6a267", 00:14:04.519 "is_configured": true, 00:14:04.519 "data_offset": 2048, 00:14:04.519 "data_size": 63488 00:14:04.519 }, 00:14:04.519 { 00:14:04.520 "name": "BaseBdev3", 00:14:04.520 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:04.520 "is_configured": true, 00:14:04.520 "data_offset": 2048, 00:14:04.520 "data_size": 63488 00:14:04.520 }, 00:14:04.520 { 00:14:04.520 "name": "BaseBdev4", 00:14:04.520 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:04.520 "is_configured": true, 00:14:04.520 "data_offset": 2048, 00:14:04.520 "data_size": 63488 00:14:04.520 } 00:14:04.520 ] 00:14:04.520 }' 00:14:04.520 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.520 03:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.520 [2024-11-21 03:22:51.954943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:14:04.520 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:04.520 Zero copy mechanism will not be used. 00:14:04.520 Running I/O for 60 seconds... 
00:14:04.779 03:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.779 03:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.779 03:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.779 [2024-11-21 03:22:52.339218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.040 03:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.040 03:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:05.040 [2024-11-21 03:22:52.390856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:05.040 [2024-11-21 03:22:52.392975] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.040 [2024-11-21 03:22:52.507839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.040 [2024-11-21 03:22:52.508429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.299 [2024-11-21 03:22:52.726256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.299 [2024-11-21 03:22:52.726571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.558 170.00 IOPS, 510.00 MiB/s [2024-11-21T03:22:53.124Z] [2024-11-21 03:22:52.990246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:05.818 [2024-11-21 03:22:53.122315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:05.818 [2024-11-21 03:22:53.356843] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.818 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.078 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.078 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.078 "name": "raid_bdev1", 00:14:06.078 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:06.078 "strip_size_kb": 0, 00:14:06.078 "state": "online", 00:14:06.078 "raid_level": "raid1", 00:14:06.078 "superblock": true, 00:14:06.078 "num_base_bdevs": 4, 00:14:06.079 "num_base_bdevs_discovered": 4, 00:14:06.079 "num_base_bdevs_operational": 4, 00:14:06.079 "process": { 00:14:06.079 "type": "rebuild", 00:14:06.079 "target": "spare", 00:14:06.079 "progress": { 00:14:06.079 "blocks": 14336, 00:14:06.079 "percent": 22 00:14:06.079 } 00:14:06.079 }, 00:14:06.079 "base_bdevs_list": [ 00:14:06.079 { 00:14:06.079 "name": "spare", 
00:14:06.079 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:06.079 "is_configured": true, 00:14:06.079 "data_offset": 2048, 00:14:06.079 "data_size": 63488 00:14:06.079 }, 00:14:06.079 { 00:14:06.079 "name": "BaseBdev2", 00:14:06.079 "uuid": "084bf0b3-eb89-527d-ae6e-2726a5d6a267", 00:14:06.079 "is_configured": true, 00:14:06.079 "data_offset": 2048, 00:14:06.079 "data_size": 63488 00:14:06.079 }, 00:14:06.079 { 00:14:06.079 "name": "BaseBdev3", 00:14:06.079 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:06.079 "is_configured": true, 00:14:06.079 "data_offset": 2048, 00:14:06.079 "data_size": 63488 00:14:06.079 }, 00:14:06.079 { 00:14:06.079 "name": "BaseBdev4", 00:14:06.079 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:06.079 "is_configured": true, 00:14:06.079 "data_offset": 2048, 00:14:06.079 "data_size": 63488 00:14:06.079 } 00:14:06.079 ] 00:14:06.079 }' 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.079 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.079 [2024-11-21 03:22:53.535469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.079 [2024-11-21 03:22:53.559595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:06.079 [2024-11-21 
03:22:53.619450] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.079 [2024-11-21 03:22:53.628591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.079 [2024-11-21 03:22:53.628646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.079 [2024-11-21 03:22:53.628676] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.338 [2024-11-21 03:22:53.653023] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.338 "name": "raid_bdev1", 00:14:06.338 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:06.338 "strip_size_kb": 0, 00:14:06.338 "state": "online", 00:14:06.338 "raid_level": "raid1", 00:14:06.338 "superblock": true, 00:14:06.338 "num_base_bdevs": 4, 00:14:06.338 "num_base_bdevs_discovered": 3, 00:14:06.338 "num_base_bdevs_operational": 3, 00:14:06.338 "base_bdevs_list": [ 00:14:06.338 { 00:14:06.338 "name": null, 00:14:06.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.338 "is_configured": false, 00:14:06.338 "data_offset": 0, 00:14:06.338 "data_size": 63488 00:14:06.338 }, 00:14:06.338 { 00:14:06.338 "name": "BaseBdev2", 00:14:06.338 "uuid": "084bf0b3-eb89-527d-ae6e-2726a5d6a267", 00:14:06.338 "is_configured": true, 00:14:06.338 "data_offset": 2048, 00:14:06.338 "data_size": 63488 00:14:06.338 }, 00:14:06.338 { 00:14:06.338 "name": "BaseBdev3", 00:14:06.338 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:06.338 "is_configured": true, 00:14:06.338 "data_offset": 2048, 00:14:06.338 "data_size": 63488 00:14:06.338 }, 00:14:06.338 { 00:14:06.338 "name": "BaseBdev4", 00:14:06.338 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:06.338 "is_configured": true, 00:14:06.338 "data_offset": 2048, 00:14:06.338 "data_size": 63488 00:14:06.338 } 00:14:06.338 ] 00:14:06.338 }' 00:14:06.338 03:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.338 03:22:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.597 157.50 IOPS, 472.50 MiB/s [2024-11-21T03:22:54.163Z] 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.597 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.856 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.856 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.856 "name": "raid_bdev1", 00:14:06.856 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:06.856 "strip_size_kb": 0, 00:14:06.856 "state": "online", 00:14:06.856 "raid_level": "raid1", 00:14:06.856 "superblock": true, 00:14:06.856 "num_base_bdevs": 4, 00:14:06.856 "num_base_bdevs_discovered": 3, 00:14:06.856 "num_base_bdevs_operational": 3, 00:14:06.856 "base_bdevs_list": [ 00:14:06.856 { 00:14:06.856 "name": null, 00:14:06.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.856 "is_configured": false, 00:14:06.856 "data_offset": 0, 00:14:06.856 "data_size": 63488 00:14:06.856 }, 00:14:06.856 { 
00:14:06.856 "name": "BaseBdev2", 00:14:06.856 "uuid": "084bf0b3-eb89-527d-ae6e-2726a5d6a267", 00:14:06.856 "is_configured": true, 00:14:06.856 "data_offset": 2048, 00:14:06.856 "data_size": 63488 00:14:06.856 }, 00:14:06.856 { 00:14:06.856 "name": "BaseBdev3", 00:14:06.856 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:06.856 "is_configured": true, 00:14:06.857 "data_offset": 2048, 00:14:06.857 "data_size": 63488 00:14:06.857 }, 00:14:06.857 { 00:14:06.857 "name": "BaseBdev4", 00:14:06.857 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:06.857 "is_configured": true, 00:14:06.857 "data_offset": 2048, 00:14:06.857 "data_size": 63488 00:14:06.857 } 00:14:06.857 ] 00:14:06.857 }' 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 [2024-11-21 03:22:54.288215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 03:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:06.857 [2024-11-21 03:22:54.340758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:14:06.857 [2024-11-21 03:22:54.342877] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.117 [2024-11-21 03:22:54.451917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.117 [2024-11-21 03:22:54.452436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.117 [2024-11-21 03:22:54.576647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:07.117 [2024-11-21 03:22:54.577350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:07.377 [2024-11-21 03:22:54.928385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:07.377 [2024-11-21 03:22:54.935112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:07.637 158.33 IOPS, 475.00 MiB/s [2024-11-21T03:22:55.203Z] [2024-11-21 03:22:55.146454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:07.637 [2024-11-21 03:22:55.147166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.896 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.897 "name": "raid_bdev1", 00:14:07.897 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:07.897 "strip_size_kb": 0, 00:14:07.897 "state": "online", 00:14:07.897 "raid_level": "raid1", 00:14:07.897 "superblock": true, 00:14:07.897 "num_base_bdevs": 4, 00:14:07.897 "num_base_bdevs_discovered": 4, 00:14:07.897 "num_base_bdevs_operational": 4, 00:14:07.897 "process": { 00:14:07.897 "type": "rebuild", 00:14:07.897 "target": "spare", 00:14:07.897 "progress": { 00:14:07.897 "blocks": 10240, 00:14:07.897 "percent": 16 00:14:07.897 } 00:14:07.897 }, 00:14:07.897 "base_bdevs_list": [ 00:14:07.897 { 00:14:07.897 "name": "spare", 00:14:07.897 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:07.897 "is_configured": true, 00:14:07.897 "data_offset": 2048, 00:14:07.897 "data_size": 63488 00:14:07.897 }, 00:14:07.897 { 00:14:07.897 "name": "BaseBdev2", 00:14:07.897 "uuid": "084bf0b3-eb89-527d-ae6e-2726a5d6a267", 00:14:07.897 "is_configured": true, 00:14:07.897 "data_offset": 2048, 00:14:07.897 "data_size": 63488 00:14:07.897 }, 00:14:07.897 { 00:14:07.897 "name": "BaseBdev3", 00:14:07.897 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:07.897 "is_configured": true, 00:14:07.897 "data_offset": 2048, 00:14:07.897 "data_size": 63488 00:14:07.897 }, 00:14:07.897 { 00:14:07.897 "name": 
"BaseBdev4", 00:14:07.897 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:07.897 "is_configured": true, 00:14:07.897 "data_offset": 2048, 00:14:07.897 "data_size": 63488 00:14:07.897 } 00:14:07.897 ] 00:14:07.897 }' 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.897 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:08.157 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.157 [2024-11-21 03:22:55.477307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.157 [2024-11-21 03:22:55.702412] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:14:08.157 [2024-11-21 03:22:55.702461] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.157 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.417 "name": "raid_bdev1", 00:14:08.417 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:08.417 "strip_size_kb": 0, 00:14:08.417 "state": "online", 00:14:08.417 "raid_level": "raid1", 00:14:08.417 "superblock": true, 00:14:08.417 "num_base_bdevs": 4, 00:14:08.417 "num_base_bdevs_discovered": 3, 00:14:08.417 
"num_base_bdevs_operational": 3, 00:14:08.417 "process": { 00:14:08.417 "type": "rebuild", 00:14:08.417 "target": "spare", 00:14:08.417 "progress": { 00:14:08.417 "blocks": 14336, 00:14:08.417 "percent": 22 00:14:08.417 } 00:14:08.417 }, 00:14:08.417 "base_bdevs_list": [ 00:14:08.417 { 00:14:08.417 "name": "spare", 00:14:08.417 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 2048, 00:14:08.417 "data_size": 63488 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": null, 00:14:08.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.417 "is_configured": false, 00:14:08.417 "data_offset": 0, 00:14:08.417 "data_size": 63488 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": "BaseBdev3", 00:14:08.417 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 2048, 00:14:08.417 "data_size": 63488 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": "BaseBdev4", 00:14:08.417 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 2048, 00:14:08.417 "data_size": 63488 00:14:08.417 } 00:14:08.417 ] 00:14:08.417 }' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.417 "name": "raid_bdev1", 00:14:08.417 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:08.417 "strip_size_kb": 0, 00:14:08.417 "state": "online", 00:14:08.417 "raid_level": "raid1", 00:14:08.417 "superblock": true, 00:14:08.417 "num_base_bdevs": 4, 00:14:08.417 "num_base_bdevs_discovered": 3, 00:14:08.417 "num_base_bdevs_operational": 3, 00:14:08.417 "process": { 00:14:08.417 "type": "rebuild", 00:14:08.417 "target": "spare", 00:14:08.417 "progress": { 00:14:08.417 "blocks": 16384, 00:14:08.417 "percent": 25 00:14:08.417 } 00:14:08.417 }, 00:14:08.417 "base_bdevs_list": [ 00:14:08.417 { 00:14:08.417 "name": "spare", 00:14:08.417 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 2048, 00:14:08.417 "data_size": 63488 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": null, 
00:14:08.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.417 "is_configured": false, 00:14:08.417 "data_offset": 0, 00:14:08.417 "data_size": 63488 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": "BaseBdev3", 00:14:08.417 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 2048, 00:14:08.417 "data_size": 63488 00:14:08.417 }, 00:14:08.417 { 00:14:08.417 "name": "BaseBdev4", 00:14:08.417 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:08.417 "is_configured": true, 00:14:08.417 "data_offset": 2048, 00:14:08.417 "data_size": 63488 00:14:08.417 } 00:14:08.417 ] 00:14:08.417 }' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.417 131.00 IOPS, 393.00 MiB/s [2024-11-21T03:22:55.983Z] 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.417 03:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.676 [2024-11-21 03:22:56.076514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:08.676 [2024-11-21 03:22:56.077072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:08.935 [2024-11-21 03:22:56.293167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:09.195 [2024-11-21 03:22:56.610382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:09.454 [2024-11-21 03:22:56.826471] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:09.454 113.20 IOPS, 339.60 MiB/s [2024-11-21T03:22:57.020Z] 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.454 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.454 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.454 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.455 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.455 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.455 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.455 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.455 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.455 03:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.455 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.714 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.714 "name": "raid_bdev1", 00:14:09.714 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:09.714 "strip_size_kb": 0, 00:14:09.714 "state": "online", 00:14:09.714 "raid_level": "raid1", 00:14:09.714 "superblock": true, 00:14:09.714 "num_base_bdevs": 4, 00:14:09.714 "num_base_bdevs_discovered": 3, 00:14:09.714 "num_base_bdevs_operational": 3, 00:14:09.714 "process": { 00:14:09.714 "type": "rebuild", 00:14:09.714 "target": "spare", 00:14:09.714 "progress": { 
00:14:09.714 "blocks": 30720, 00:14:09.714 "percent": 48 00:14:09.714 } 00:14:09.714 }, 00:14:09.714 "base_bdevs_list": [ 00:14:09.714 { 00:14:09.714 "name": "spare", 00:14:09.714 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:09.714 "is_configured": true, 00:14:09.714 "data_offset": 2048, 00:14:09.714 "data_size": 63488 00:14:09.714 }, 00:14:09.714 { 00:14:09.714 "name": null, 00:14:09.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.714 "is_configured": false, 00:14:09.714 "data_offset": 0, 00:14:09.714 "data_size": 63488 00:14:09.714 }, 00:14:09.714 { 00:14:09.714 "name": "BaseBdev3", 00:14:09.714 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:09.714 "is_configured": true, 00:14:09.714 "data_offset": 2048, 00:14:09.714 "data_size": 63488 00:14:09.714 }, 00:14:09.714 { 00:14:09.714 "name": "BaseBdev4", 00:14:09.714 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:09.714 "is_configured": true, 00:14:09.714 "data_offset": 2048, 00:14:09.715 "data_size": 63488 00:14:09.715 } 00:14:09.715 ] 00:14:09.715 }' 00:14:09.715 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.715 [2024-11-21 03:22:57.057457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:09.715 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.715 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.715 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.715 03:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.974 [2024-11-21 03:22:57.283884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:10.257 [2024-11-21 03:22:57.621052] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:10.547 [2024-11-21 03:22:57.846779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:10.808 100.33 IOPS, 301.00 MiB/s [2024-11-21T03:22:58.374Z] 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.808 [2024-11-21 03:22:58.193358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.808 "name": "raid_bdev1", 00:14:10.808 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:10.808 "strip_size_kb": 0, 00:14:10.808 "state": 
"online", 00:14:10.808 "raid_level": "raid1", 00:14:10.808 "superblock": true, 00:14:10.808 "num_base_bdevs": 4, 00:14:10.808 "num_base_bdevs_discovered": 3, 00:14:10.808 "num_base_bdevs_operational": 3, 00:14:10.808 "process": { 00:14:10.808 "type": "rebuild", 00:14:10.808 "target": "spare", 00:14:10.808 "progress": { 00:14:10.808 "blocks": 43008, 00:14:10.808 "percent": 67 00:14:10.808 } 00:14:10.808 }, 00:14:10.808 "base_bdevs_list": [ 00:14:10.808 { 00:14:10.808 "name": "spare", 00:14:10.808 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:10.808 "is_configured": true, 00:14:10.808 "data_offset": 2048, 00:14:10.808 "data_size": 63488 00:14:10.808 }, 00:14:10.808 { 00:14:10.808 "name": null, 00:14:10.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.808 "is_configured": false, 00:14:10.808 "data_offset": 0, 00:14:10.808 "data_size": 63488 00:14:10.808 }, 00:14:10.808 { 00:14:10.808 "name": "BaseBdev3", 00:14:10.808 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:10.808 "is_configured": true, 00:14:10.808 "data_offset": 2048, 00:14:10.808 "data_size": 63488 00:14:10.808 }, 00:14:10.808 { 00:14:10.808 "name": "BaseBdev4", 00:14:10.808 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:10.808 "is_configured": true, 00:14:10.808 "data_offset": 2048, 00:14:10.808 "data_size": 63488 00:14:10.808 } 00:14:10.808 ] 00:14:10.808 }' 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.808 [2024-11-21 03:22:58.301377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:14:10.808 03:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.068 [2024-11-21 03:22:58.531089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:11.638 92.86 IOPS, 278.57 MiB/s [2024-11-21T03:22:59.204Z] [2024-11-21 03:22:59.196374] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:11.896 [2024-11-21 03:22:59.301761] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:11.897 [2024-11-21 03:22:59.305416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.897 03:22:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.897 "name": "raid_bdev1", 00:14:11.897 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:11.897 "strip_size_kb": 0, 00:14:11.897 "state": "online", 00:14:11.897 "raid_level": "raid1", 00:14:11.897 "superblock": true, 00:14:11.897 "num_base_bdevs": 4, 00:14:11.897 "num_base_bdevs_discovered": 3, 00:14:11.897 "num_base_bdevs_operational": 3, 00:14:11.897 "base_bdevs_list": [ 00:14:11.897 { 00:14:11.897 "name": "spare", 00:14:11.897 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:11.897 "is_configured": true, 00:14:11.897 "data_offset": 2048, 00:14:11.897 "data_size": 63488 00:14:11.897 }, 00:14:11.897 { 00:14:11.897 "name": null, 00:14:11.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.897 "is_configured": false, 00:14:11.897 "data_offset": 0, 00:14:11.897 "data_size": 63488 00:14:11.897 }, 00:14:11.897 { 00:14:11.897 "name": "BaseBdev3", 00:14:11.897 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:11.897 "is_configured": true, 00:14:11.897 "data_offset": 2048, 00:14:11.897 "data_size": 63488 00:14:11.897 }, 00:14:11.897 { 00:14:11.897 "name": "BaseBdev4", 00:14:11.897 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:11.897 "is_configured": true, 00:14:11.897 "data_offset": 2048, 00:14:11.897 "data_size": 63488 00:14:11.897 } 00:14:11.897 ] 00:14:11.897 }' 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:11.897 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.156 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:12.156 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:12.156 03:22:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.156 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.157 "name": "raid_bdev1", 00:14:12.157 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:12.157 "strip_size_kb": 0, 00:14:12.157 "state": "online", 00:14:12.157 "raid_level": "raid1", 00:14:12.157 "superblock": true, 00:14:12.157 "num_base_bdevs": 4, 00:14:12.157 "num_base_bdevs_discovered": 3, 00:14:12.157 "num_base_bdevs_operational": 3, 00:14:12.157 "base_bdevs_list": [ 00:14:12.157 { 00:14:12.157 "name": "spare", 00:14:12.157 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:12.157 "is_configured": true, 00:14:12.157 "data_offset": 2048, 00:14:12.157 "data_size": 63488 00:14:12.157 }, 00:14:12.157 { 00:14:12.157 "name": null, 00:14:12.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.157 "is_configured": false, 00:14:12.157 "data_offset": 
0, 00:14:12.157 "data_size": 63488 00:14:12.157 }, 00:14:12.157 { 00:14:12.157 "name": "BaseBdev3", 00:14:12.157 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:12.157 "is_configured": true, 00:14:12.157 "data_offset": 2048, 00:14:12.157 "data_size": 63488 00:14:12.157 }, 00:14:12.157 { 00:14:12.157 "name": "BaseBdev4", 00:14:12.157 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:12.157 "is_configured": true, 00:14:12.157 "data_offset": 2048, 00:14:12.157 "data_size": 63488 00:14:12.157 } 00:14:12.157 ] 00:14:12.157 }' 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.157 "name": "raid_bdev1", 00:14:12.157 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:12.157 "strip_size_kb": 0, 00:14:12.157 "state": "online", 00:14:12.157 "raid_level": "raid1", 00:14:12.157 "superblock": true, 00:14:12.157 "num_base_bdevs": 4, 00:14:12.157 "num_base_bdevs_discovered": 3, 00:14:12.157 "num_base_bdevs_operational": 3, 00:14:12.157 "base_bdevs_list": [ 00:14:12.157 { 00:14:12.157 "name": "spare", 00:14:12.157 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:12.157 "is_configured": true, 00:14:12.157 "data_offset": 2048, 00:14:12.157 "data_size": 63488 00:14:12.157 }, 00:14:12.157 { 00:14:12.157 "name": null, 00:14:12.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.157 "is_configured": false, 00:14:12.157 "data_offset": 0, 00:14:12.157 "data_size": 63488 00:14:12.157 }, 00:14:12.157 { 00:14:12.157 "name": "BaseBdev3", 00:14:12.157 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:12.157 "is_configured": true, 00:14:12.157 "data_offset": 2048, 00:14:12.157 "data_size": 63488 00:14:12.157 }, 00:14:12.157 { 00:14:12.157 "name": "BaseBdev4", 00:14:12.157 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:12.157 "is_configured": 
true, 00:14:12.157 "data_offset": 2048, 00:14:12.157 "data_size": 63488 00:14:12.157 } 00:14:12.157 ] 00:14:12.157 }' 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.157 03:22:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.676 86.00 IOPS, 258.00 MiB/s [2024-11-21T03:23:00.242Z] 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.676 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.676 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.676 [2024-11-21 03:23:00.077364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.676 [2024-11-21 03:23:00.077401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.676 00:14:12.676 Latency(us) 00:14:12.676 [2024-11-21T03:23:00.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.676 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:12.676 raid_bdev1 : 8.21 84.27 252.82 0.00 0.00 15759.65 315.96 117899.68 00:14:12.676 [2024-11-21T03:23:00.242Z] =================================================================================================================== 00:14:12.676 [2024-11-21T03:23:00.242Z] Total : 84.27 252.82 0.00 0.00 15759.65 315.96 117899.68 00:14:12.676 [2024-11-21 03:23:00.172892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.676 [2024-11-21 03:23:00.172945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.676 [2024-11-21 03:23:00.173055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.677 [2024-11-21 03:23:00.173079] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:12.677 { 00:14:12.677 "results": [ 00:14:12.677 { 00:14:12.677 "job": "raid_bdev1", 00:14:12.677 "core_mask": "0x1", 00:14:12.677 "workload": "randrw", 00:14:12.677 "percentage": 50, 00:14:12.677 "status": "finished", 00:14:12.677 "queue_depth": 2, 00:14:12.677 "io_size": 3145728, 00:14:12.677 "runtime": 8.211478, 00:14:12.677 "iops": 84.27228326009033, 00:14:12.677 "mibps": 252.816849780271, 00:14:12.677 "io_failed": 0, 00:14:12.677 "io_timeout": 0, 00:14:12.677 "avg_latency_us": 15759.645914081264, 00:14:12.677 "min_latency_us": 315.95572213021876, 00:14:12.677 "max_latency_us": 117899.6809901508 00:14:12.677 } 00:14:12.677 ], 00:14:12.677 "core_count": 1 00:14:12.677 } 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.677 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:12.937 /dev/nbd0 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.937 1+0 records in 00:14:12.937 1+0 records out 00:14:12.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447302 s, 9.2 MB/s 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.937 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:13.197 /dev/nbd1 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.197 1+0 records in 00:14:13.197 1+0 records out 00:14:13.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412575 s, 9.9 MB/s 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.197 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.457 03:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:13.716 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:13.716 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:13.716 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:13.716 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.716 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.716 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.717 03:23:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:13.717 /dev/nbd1 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.717 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.977 1+0 records in 00:14:13.977 1+0 records out 00:14:13.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372251 s, 11.0 MB/s 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.977 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.237 
03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.237 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:14.497 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.497 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.497 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.498 03:23:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.498 [2024-11-21 03:23:01.838566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.498 [2024-11-21 03:23:01.838621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.498 [2024-11-21 03:23:01.838640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:14.498 [2024-11-21 03:23:01.838654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.498 [2024-11-21 03:23:01.841073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.498 [2024-11-21 03:23:01.841110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.498 [2024-11-21 03:23:01.841191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:14.498 [2024-11-21 03:23:01.841237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.498 [2024-11-21 03:23:01.841342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.498 [2024-11-21 
03:23:01.841443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.498 spare 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.498 [2024-11-21 03:23:01.941519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:14.498 [2024-11-21 03:23:01.941560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.498 [2024-11-21 03:23:01.941877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:14:14.498 [2024-11-21 03:23:01.942074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:14.498 [2024-11-21 03:23:01.942092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:14.498 [2024-11-21 03:23:01.942240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.498 "name": "raid_bdev1", 00:14:14.498 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:14.498 "strip_size_kb": 0, 00:14:14.498 "state": "online", 00:14:14.498 "raid_level": "raid1", 00:14:14.498 "superblock": true, 00:14:14.498 "num_base_bdevs": 4, 00:14:14.498 "num_base_bdevs_discovered": 3, 00:14:14.498 "num_base_bdevs_operational": 3, 00:14:14.498 "base_bdevs_list": [ 00:14:14.498 { 00:14:14.498 "name": "spare", 00:14:14.498 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:14.498 "is_configured": true, 00:14:14.498 "data_offset": 2048, 00:14:14.498 "data_size": 63488 00:14:14.498 }, 00:14:14.498 { 00:14:14.498 "name": null, 00:14:14.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.498 "is_configured": false, 
00:14:14.498 "data_offset": 2048, 00:14:14.498 "data_size": 63488 00:14:14.498 }, 00:14:14.498 { 00:14:14.498 "name": "BaseBdev3", 00:14:14.498 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:14.498 "is_configured": true, 00:14:14.498 "data_offset": 2048, 00:14:14.498 "data_size": 63488 00:14:14.498 }, 00:14:14.498 { 00:14:14.498 "name": "BaseBdev4", 00:14:14.498 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:14.498 "is_configured": true, 00:14:14.498 "data_offset": 2048, 00:14:14.498 "data_size": 63488 00:14:14.498 } 00:14:14.498 ] 00:14:14.498 }' 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.498 03:23:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:15.067 "name": "raid_bdev1", 00:14:15.067 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:15.067 "strip_size_kb": 0, 00:14:15.067 "state": "online", 00:14:15.067 "raid_level": "raid1", 00:14:15.067 "superblock": true, 00:14:15.067 "num_base_bdevs": 4, 00:14:15.067 "num_base_bdevs_discovered": 3, 00:14:15.067 "num_base_bdevs_operational": 3, 00:14:15.067 "base_bdevs_list": [ 00:14:15.067 { 00:14:15.067 "name": "spare", 00:14:15.067 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:15.067 "is_configured": true, 00:14:15.067 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 }, 00:14:15.067 { 00:14:15.067 "name": null, 00:14:15.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.067 "is_configured": false, 00:14:15.067 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 }, 00:14:15.067 { 00:14:15.067 "name": "BaseBdev3", 00:14:15.067 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:15.067 "is_configured": true, 00:14:15.067 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 }, 00:14:15.067 { 00:14:15.067 "name": "BaseBdev4", 00:14:15.067 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:15.067 "is_configured": true, 00:14:15.067 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 } 00:14:15.067 ] 00:14:15.067 }' 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.067 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.068 [2024-11-21 03:23:02.586876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.068 03:23:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.068 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.326 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.326 "name": "raid_bdev1", 00:14:15.326 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:15.326 "strip_size_kb": 0, 00:14:15.326 "state": "online", 00:14:15.326 "raid_level": "raid1", 00:14:15.326 "superblock": true, 00:14:15.326 "num_base_bdevs": 4, 00:14:15.326 "num_base_bdevs_discovered": 2, 00:14:15.326 "num_base_bdevs_operational": 2, 00:14:15.326 "base_bdevs_list": [ 00:14:15.326 { 00:14:15.326 "name": null, 00:14:15.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.326 "is_configured": false, 00:14:15.326 "data_offset": 0, 00:14:15.326 "data_size": 63488 00:14:15.326 }, 00:14:15.326 { 00:14:15.326 "name": null, 00:14:15.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.326 "is_configured": false, 00:14:15.326 "data_offset": 2048, 00:14:15.326 "data_size": 63488 00:14:15.326 }, 00:14:15.326 { 00:14:15.326 "name": "BaseBdev3", 00:14:15.326 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:15.326 "is_configured": true, 00:14:15.326 "data_offset": 2048, 00:14:15.326 "data_size": 63488 00:14:15.326 }, 00:14:15.326 { 00:14:15.326 "name": "BaseBdev4", 00:14:15.326 "uuid": 
"a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:15.326 "is_configured": true, 00:14:15.326 "data_offset": 2048, 00:14:15.326 "data_size": 63488 00:14:15.326 } 00:14:15.326 ] 00:14:15.326 }' 00:14:15.326 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.326 03:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.585 03:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.585 03:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.585 03:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.585 [2024-11-21 03:23:03.007124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.585 [2024-11-21 03:23:03.007323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:15.585 [2024-11-21 03:23:03.007349] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:15.585 [2024-11-21 03:23:03.007388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.585 [2024-11-21 03:23:03.011960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:14:15.585 03:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.585 03:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:15.585 [2024-11-21 03:23:03.013955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.524 "name": "raid_bdev1", 00:14:16.524 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:16.524 "strip_size_kb": 0, 00:14:16.524 "state": "online", 
00:14:16.524 "raid_level": "raid1", 00:14:16.524 "superblock": true, 00:14:16.524 "num_base_bdevs": 4, 00:14:16.524 "num_base_bdevs_discovered": 3, 00:14:16.524 "num_base_bdevs_operational": 3, 00:14:16.524 "process": { 00:14:16.524 "type": "rebuild", 00:14:16.524 "target": "spare", 00:14:16.524 "progress": { 00:14:16.524 "blocks": 20480, 00:14:16.524 "percent": 32 00:14:16.524 } 00:14:16.524 }, 00:14:16.524 "base_bdevs_list": [ 00:14:16.524 { 00:14:16.524 "name": "spare", 00:14:16.524 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:16.524 "is_configured": true, 00:14:16.524 "data_offset": 2048, 00:14:16.524 "data_size": 63488 00:14:16.524 }, 00:14:16.524 { 00:14:16.524 "name": null, 00:14:16.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.524 "is_configured": false, 00:14:16.524 "data_offset": 2048, 00:14:16.524 "data_size": 63488 00:14:16.524 }, 00:14:16.524 { 00:14:16.524 "name": "BaseBdev3", 00:14:16.524 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:16.524 "is_configured": true, 00:14:16.524 "data_offset": 2048, 00:14:16.524 "data_size": 63488 00:14:16.524 }, 00:14:16.524 { 00:14:16.524 "name": "BaseBdev4", 00:14:16.524 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:16.524 "is_configured": true, 00:14:16.524 "data_offset": 2048, 00:14:16.524 "data_size": 63488 00:14:16.524 } 00:14:16.524 ] 00:14:16.524 }' 00:14:16.524 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:16.784 03:23:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.784 [2024-11-21 03:23:04.148707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.784 [2024-11-21 03:23:04.220619] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.784 [2024-11-21 03:23:04.220684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.784 [2024-11-21 03:23:04.220699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.784 [2024-11-21 03:23:04.220707] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.784 03:23:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.784 "name": "raid_bdev1", 00:14:16.784 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:16.784 "strip_size_kb": 0, 00:14:16.784 "state": "online", 00:14:16.784 "raid_level": "raid1", 00:14:16.784 "superblock": true, 00:14:16.784 "num_base_bdevs": 4, 00:14:16.784 "num_base_bdevs_discovered": 2, 00:14:16.784 "num_base_bdevs_operational": 2, 00:14:16.784 "base_bdevs_list": [ 00:14:16.784 { 00:14:16.784 "name": null, 00:14:16.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.784 "is_configured": false, 00:14:16.784 "data_offset": 0, 00:14:16.784 "data_size": 63488 00:14:16.784 }, 00:14:16.784 { 00:14:16.784 "name": null, 00:14:16.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.784 "is_configured": false, 00:14:16.784 "data_offset": 2048, 00:14:16.784 "data_size": 63488 00:14:16.784 }, 00:14:16.784 { 00:14:16.784 "name": "BaseBdev3", 00:14:16.784 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:16.784 "is_configured": true, 00:14:16.784 "data_offset": 2048, 00:14:16.784 "data_size": 63488 00:14:16.784 }, 00:14:16.784 { 00:14:16.784 "name": "BaseBdev4", 00:14:16.784 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:16.784 "is_configured": true, 00:14:16.784 "data_offset": 2048, 00:14:16.784 
"data_size": 63488 00:14:16.784 } 00:14:16.784 ] 00:14:16.784 }' 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.784 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.353 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:17.353 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.353 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.353 [2024-11-21 03:23:04.669469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.353 [2024-11-21 03:23:04.669538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.353 [2024-11-21 03:23:04.669560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:17.353 [2024-11-21 03:23:04.669571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.353 [2024-11-21 03:23:04.670010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.353 [2024-11-21 03:23:04.670054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.353 [2024-11-21 03:23:04.670149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:17.353 [2024-11-21 03:23:04.670173] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:17.353 [2024-11-21 03:23:04.670184] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:17.353 [2024-11-21 03:23:04.670207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.353 [2024-11-21 03:23:04.674696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:14:17.353 spare 00:14:17.353 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.353 03:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:17.353 [2024-11-21 03:23:04.676665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.293 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.293 "name": "raid_bdev1", 00:14:18.293 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:18.293 "strip_size_kb": 0, 00:14:18.293 
"state": "online", 00:14:18.293 "raid_level": "raid1", 00:14:18.293 "superblock": true, 00:14:18.293 "num_base_bdevs": 4, 00:14:18.293 "num_base_bdevs_discovered": 3, 00:14:18.293 "num_base_bdevs_operational": 3, 00:14:18.293 "process": { 00:14:18.293 "type": "rebuild", 00:14:18.293 "target": "spare", 00:14:18.293 "progress": { 00:14:18.293 "blocks": 20480, 00:14:18.293 "percent": 32 00:14:18.293 } 00:14:18.293 }, 00:14:18.293 "base_bdevs_list": [ 00:14:18.293 { 00:14:18.293 "name": "spare", 00:14:18.293 "uuid": "92eea285-3604-541f-a12f-f658e7fab47f", 00:14:18.293 "is_configured": true, 00:14:18.293 "data_offset": 2048, 00:14:18.293 "data_size": 63488 00:14:18.293 }, 00:14:18.293 { 00:14:18.293 "name": null, 00:14:18.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.293 "is_configured": false, 00:14:18.293 "data_offset": 2048, 00:14:18.293 "data_size": 63488 00:14:18.293 }, 00:14:18.293 { 00:14:18.293 "name": "BaseBdev3", 00:14:18.293 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:18.293 "is_configured": true, 00:14:18.293 "data_offset": 2048, 00:14:18.293 "data_size": 63488 00:14:18.293 }, 00:14:18.293 { 00:14:18.293 "name": "BaseBdev4", 00:14:18.293 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:18.293 "is_configured": true, 00:14:18.294 "data_offset": 2048, 00:14:18.294 "data_size": 63488 00:14:18.294 } 00:14:18.294 ] 00:14:18.294 }' 00:14:18.294 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.294 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.294 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.294 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.294 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:18.294 03:23:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.294 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.294 [2024-11-21 03:23:05.831333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.553 [2024-11-21 03:23:05.883268] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.553 [2024-11-21 03:23:05.883328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.553 [2024-11-21 03:23:05.883346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.553 [2024-11-21 03:23:05.883353] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.553 03:23:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.553 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.553 "name": "raid_bdev1", 00:14:18.553 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:18.553 "strip_size_kb": 0, 00:14:18.553 "state": "online", 00:14:18.553 "raid_level": "raid1", 00:14:18.553 "superblock": true, 00:14:18.553 "num_base_bdevs": 4, 00:14:18.553 "num_base_bdevs_discovered": 2, 00:14:18.553 "num_base_bdevs_operational": 2, 00:14:18.553 "base_bdevs_list": [ 00:14:18.553 { 00:14:18.553 "name": null, 00:14:18.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.553 "is_configured": false, 00:14:18.553 "data_offset": 0, 00:14:18.553 "data_size": 63488 00:14:18.553 }, 00:14:18.553 { 00:14:18.553 "name": null, 00:14:18.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.553 "is_configured": false, 00:14:18.553 "data_offset": 2048, 00:14:18.553 "data_size": 63488 00:14:18.553 }, 00:14:18.553 { 00:14:18.553 "name": "BaseBdev3", 00:14:18.553 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:18.553 "is_configured": true, 00:14:18.553 "data_offset": 2048, 00:14:18.553 "data_size": 63488 00:14:18.553 }, 00:14:18.554 { 00:14:18.554 "name": "BaseBdev4", 00:14:18.554 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:18.554 "is_configured": true, 00:14:18.554 "data_offset": 2048, 00:14:18.554 
"data_size": 63488 00:14:18.554 } 00:14:18.554 ] 00:14:18.554 }' 00:14:18.554 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.554 03:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.813 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.814 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.814 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.814 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.814 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.814 "name": "raid_bdev1", 00:14:18.814 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:18.814 "strip_size_kb": 0, 00:14:18.814 "state": "online", 00:14:18.814 "raid_level": "raid1", 00:14:18.814 "superblock": true, 00:14:18.814 "num_base_bdevs": 4, 00:14:18.814 "num_base_bdevs_discovered": 2, 00:14:18.814 "num_base_bdevs_operational": 2, 00:14:18.814 "base_bdevs_list": [ 00:14:18.814 { 00:14:18.814 "name": null, 00:14:18.814 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:18.814 "is_configured": false, 00:14:18.814 "data_offset": 0, 00:14:18.814 "data_size": 63488 00:14:18.814 }, 00:14:18.814 { 00:14:18.814 "name": null, 00:14:18.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.814 "is_configured": false, 00:14:18.814 "data_offset": 2048, 00:14:18.814 "data_size": 63488 00:14:18.814 }, 00:14:18.814 { 00:14:18.814 "name": "BaseBdev3", 00:14:18.814 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:18.814 "is_configured": true, 00:14:18.814 "data_offset": 2048, 00:14:18.814 "data_size": 63488 00:14:18.814 }, 00:14:18.814 { 00:14:18.814 "name": "BaseBdev4", 00:14:18.814 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:18.814 "is_configured": true, 00:14:18.814 "data_offset": 2048, 00:14:18.814 "data_size": 63488 00:14:18.814 } 00:14:18.814 ] 00:14:18.814 }' 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.073 03:23:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.073 [2024-11-21 03:23:06.488212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.073 [2024-11-21 03:23:06.488323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.073 [2024-11-21 03:23:06.488354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:19.073 [2024-11-21 03:23:06.488363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.073 [2024-11-21 03:23:06.488775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.073 [2024-11-21 03:23:06.488794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.073 [2024-11-21 03:23:06.488874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:19.073 [2024-11-21 03:23:06.488891] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:19.073 [2024-11-21 03:23:06.488900] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:19.073 [2024-11-21 03:23:06.488910] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:19.073 BaseBdev1 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.073 03:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.012 "name": "raid_bdev1", 00:14:20.012 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:20.012 "strip_size_kb": 0, 00:14:20.012 "state": "online", 00:14:20.012 "raid_level": "raid1", 00:14:20.012 "superblock": true, 00:14:20.012 "num_base_bdevs": 4, 00:14:20.012 "num_base_bdevs_discovered": 2, 00:14:20.012 "num_base_bdevs_operational": 2, 00:14:20.012 "base_bdevs_list": [ 00:14:20.012 { 00:14:20.012 "name": null, 00:14:20.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.012 "is_configured": false, 00:14:20.012 
"data_offset": 0, 00:14:20.012 "data_size": 63488 00:14:20.012 }, 00:14:20.012 { 00:14:20.012 "name": null, 00:14:20.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.012 "is_configured": false, 00:14:20.012 "data_offset": 2048, 00:14:20.012 "data_size": 63488 00:14:20.012 }, 00:14:20.012 { 00:14:20.012 "name": "BaseBdev3", 00:14:20.012 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:20.012 "is_configured": true, 00:14:20.012 "data_offset": 2048, 00:14:20.012 "data_size": 63488 00:14:20.012 }, 00:14:20.012 { 00:14:20.012 "name": "BaseBdev4", 00:14:20.012 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:20.012 "is_configured": true, 00:14:20.012 "data_offset": 2048, 00:14:20.012 "data_size": 63488 00:14:20.012 } 00:14:20.012 ] 00:14:20.012 }' 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.012 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.582 "name": "raid_bdev1", 00:14:20.582 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:20.582 "strip_size_kb": 0, 00:14:20.582 "state": "online", 00:14:20.582 "raid_level": "raid1", 00:14:20.582 "superblock": true, 00:14:20.582 "num_base_bdevs": 4, 00:14:20.582 "num_base_bdevs_discovered": 2, 00:14:20.582 "num_base_bdevs_operational": 2, 00:14:20.582 "base_bdevs_list": [ 00:14:20.582 { 00:14:20.582 "name": null, 00:14:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.582 "is_configured": false, 00:14:20.582 "data_offset": 0, 00:14:20.582 "data_size": 63488 00:14:20.582 }, 00:14:20.582 { 00:14:20.582 "name": null, 00:14:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.582 "is_configured": false, 00:14:20.582 "data_offset": 2048, 00:14:20.582 "data_size": 63488 00:14:20.582 }, 00:14:20.582 { 00:14:20.582 "name": "BaseBdev3", 00:14:20.582 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:20.582 "is_configured": true, 00:14:20.582 "data_offset": 2048, 00:14:20.582 "data_size": 63488 00:14:20.582 }, 00:14:20.582 { 00:14:20.582 "name": "BaseBdev4", 00:14:20.582 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:20.582 "is_configured": true, 00:14:20.582 "data_offset": 2048, 00:14:20.582 "data_size": 63488 00:14:20.582 } 00:14:20.582 ] 00:14:20.582 }' 00:14:20.582 03:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.582 
03:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.582 [2024-11-21 03:23:08.084847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.582 [2024-11-21 03:23:08.085011] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:20.582 [2024-11-21 03:23:08.085027] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:20.582 request: 00:14:20.582 { 00:14:20.582 "base_bdev": "BaseBdev1", 00:14:20.582 "raid_bdev": "raid_bdev1", 00:14:20.582 "method": "bdev_raid_add_base_bdev", 00:14:20.582 "req_id": 1 00:14:20.582 } 00:14:20.582 Got JSON-RPC error response 00:14:20.582 response: 00:14:20.582 { 00:14:20.582 "code": -22, 00:14:20.582 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:20.582 } 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.582 03:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.962 03:23:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.962 "name": "raid_bdev1", 00:14:21.962 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:21.962 "strip_size_kb": 0, 00:14:21.962 "state": "online", 00:14:21.962 "raid_level": "raid1", 00:14:21.962 "superblock": true, 00:14:21.962 "num_base_bdevs": 4, 00:14:21.962 "num_base_bdevs_discovered": 2, 00:14:21.962 "num_base_bdevs_operational": 2, 00:14:21.962 "base_bdevs_list": [ 00:14:21.962 { 00:14:21.962 "name": null, 00:14:21.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.962 "is_configured": false, 00:14:21.962 "data_offset": 0, 00:14:21.962 "data_size": 63488 00:14:21.962 }, 00:14:21.962 { 00:14:21.962 "name": null, 00:14:21.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.962 "is_configured": false, 00:14:21.962 "data_offset": 2048, 00:14:21.962 "data_size": 63488 00:14:21.962 }, 00:14:21.962 { 00:14:21.962 "name": "BaseBdev3", 00:14:21.962 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:21.962 "is_configured": true, 00:14:21.962 "data_offset": 2048, 00:14:21.962 "data_size": 63488 00:14:21.962 }, 00:14:21.962 { 00:14:21.962 "name": "BaseBdev4", 00:14:21.962 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:21.962 "is_configured": true, 00:14:21.962 "data_offset": 2048, 00:14:21.962 "data_size": 63488 00:14:21.962 } 00:14:21.962 ] 00:14:21.962 }' 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.962 03:23:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.962 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.222 "name": "raid_bdev1", 00:14:22.222 "uuid": "86e952ad-8a92-4dcf-a76b-7c15a7e4b442", 00:14:22.222 "strip_size_kb": 0, 00:14:22.222 "state": "online", 00:14:22.222 "raid_level": "raid1", 00:14:22.222 "superblock": true, 00:14:22.222 "num_base_bdevs": 4, 00:14:22.222 "num_base_bdevs_discovered": 2, 00:14:22.222 "num_base_bdevs_operational": 2, 00:14:22.222 "base_bdevs_list": [ 00:14:22.222 { 00:14:22.222 "name": null, 00:14:22.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.222 "is_configured": false, 00:14:22.222 "data_offset": 0, 00:14:22.222 "data_size": 63488 00:14:22.222 }, 00:14:22.222 { 00:14:22.222 "name": null, 00:14:22.222 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:22.222 "is_configured": false, 00:14:22.222 "data_offset": 2048, 00:14:22.222 "data_size": 63488 00:14:22.222 }, 00:14:22.222 { 00:14:22.222 "name": "BaseBdev3", 00:14:22.222 "uuid": "b29a23da-0464-5c76-b0b8-ce0b6766afca", 00:14:22.222 "is_configured": true, 00:14:22.222 "data_offset": 2048, 00:14:22.222 "data_size": 63488 00:14:22.222 }, 00:14:22.222 { 00:14:22.222 "name": "BaseBdev4", 00:14:22.222 "uuid": "a26e5307-89f6-5681-8488-bea4fbb6a2f0", 00:14:22.222 "is_configured": true, 00:14:22.222 "data_offset": 2048, 00:14:22.222 "data_size": 63488 00:14:22.222 } 00:14:22.222 ] 00:14:22.222 }' 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91794 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 91794 ']' 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 91794 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91794 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:14:22.222 killing process with pid 91794 00:14:22.222 Received shutdown signal, test time was about 17.713107 seconds 00:14:22.222 00:14:22.222 Latency(us) 00:14:22.222 [2024-11-21T03:23:09.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.222 [2024-11-21T03:23:09.788Z] =================================================================================================================== 00:14:22.222 [2024-11-21T03:23:09.788Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91794' 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 91794 00:14:22.222 [2024-11-21 03:23:09.671649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.222 [2024-11-21 03:23:09.671796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.222 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 91794 00:14:22.222 [2024-11-21 03:23:09.671871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.222 [2024-11-21 03:23:09.671884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:22.222 [2024-11-21 03:23:09.719512] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.483 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:22.483 00:14:22.483 real 0m19.755s 00:14:22.483 user 0m26.272s 00:14:22.483 sys 0m2.642s 00:14:22.483 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.483 ************************************ 00:14:22.483 END TEST raid_rebuild_test_sb_io 00:14:22.483 ************************************ 00:14:22.483 03:23:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.483 03:23:09 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:22.483 03:23:09 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:22.483 03:23:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:22.483 03:23:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.483 03:23:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.483 ************************************ 00:14:22.483 START TEST raid5f_state_function_test 00:14:22.483 ************************************ 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.483 03:23:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:22.483 Process raid pid: 92500 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92500 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:22.483 
03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92500' 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92500 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 92500 ']' 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.483 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.743 [2024-11-21 03:23:10.106637] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:22.743 [2024-11-21 03:23:10.106834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.743 [2024-11-21 03:23:10.243170] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:22.743 [2024-11-21 03:23:10.279784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.002 [2024-11-21 03:23:10.309644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.002 [2024-11-21 03:23:10.352954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.002 [2024-11-21 03:23:10.353074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.571 [2024-11-21 03:23:10.940119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.571 [2024-11-21 03:23:10.940253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.571 [2024-11-21 03:23:10.940299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.571 [2024-11-21 03:23:10.940323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.571 [2024-11-21 03:23:10.940348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.571 [2024-11-21 03:23:10.940403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.571 03:23:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.571 "name": "Existed_Raid", 00:14:23.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.571 "strip_size_kb": 64, 00:14:23.571 "state": 
"configuring", 00:14:23.571 "raid_level": "raid5f", 00:14:23.571 "superblock": false, 00:14:23.571 "num_base_bdevs": 3, 00:14:23.571 "num_base_bdevs_discovered": 0, 00:14:23.571 "num_base_bdevs_operational": 3, 00:14:23.571 "base_bdevs_list": [ 00:14:23.571 { 00:14:23.571 "name": "BaseBdev1", 00:14:23.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.571 "is_configured": false, 00:14:23.571 "data_offset": 0, 00:14:23.571 "data_size": 0 00:14:23.571 }, 00:14:23.571 { 00:14:23.571 "name": "BaseBdev2", 00:14:23.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.571 "is_configured": false, 00:14:23.571 "data_offset": 0, 00:14:23.571 "data_size": 0 00:14:23.571 }, 00:14:23.571 { 00:14:23.571 "name": "BaseBdev3", 00:14:23.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.571 "is_configured": false, 00:14:23.571 "data_offset": 0, 00:14:23.571 "data_size": 0 00:14:23.571 } 00:14:23.571 ] 00:14:23.571 }' 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.571 03:23:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [2024-11-21 03:23:11.348130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.830 [2024-11-21 03:23:11.348166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r 
raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [2024-11-21 03:23:11.356167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.830 [2024-11-21 03:23:11.356253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.830 [2024-11-21 03:23:11.356268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.830 [2024-11-21 03:23:11.356277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.830 [2024-11-21 03:23:11.356285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.830 [2024-11-21 03:23:11.356294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [2024-11-21 03:23:11.373328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.830 BaseBdev1 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.090 [ 00:14:24.090 { 00:14:24.090 "name": "BaseBdev1", 00:14:24.090 "aliases": [ 00:14:24.090 "45218523-6915-4188-9ef8-2c502ac5c560" 00:14:24.090 ], 00:14:24.090 "product_name": "Malloc disk", 00:14:24.090 "block_size": 512, 00:14:24.090 "num_blocks": 65536, 00:14:24.090 "uuid": "45218523-6915-4188-9ef8-2c502ac5c560", 00:14:24.090 "assigned_rate_limits": { 00:14:24.090 "rw_ios_per_sec": 0, 00:14:24.090 "rw_mbytes_per_sec": 0, 00:14:24.090 "r_mbytes_per_sec": 0, 00:14:24.090 "w_mbytes_per_sec": 0 00:14:24.090 }, 00:14:24.090 "claimed": true, 00:14:24.090 "claim_type": "exclusive_write", 00:14:24.090 "zoned": false, 00:14:24.090 "supported_io_types": { 00:14:24.090 "read": true, 00:14:24.090 "write": true, 
00:14:24.090 "unmap": true, 00:14:24.090 "flush": true, 00:14:24.090 "reset": true, 00:14:24.090 "nvme_admin": false, 00:14:24.090 "nvme_io": false, 00:14:24.090 "nvme_io_md": false, 00:14:24.090 "write_zeroes": true, 00:14:24.090 "zcopy": true, 00:14:24.090 "get_zone_info": false, 00:14:24.090 "zone_management": false, 00:14:24.090 "zone_append": false, 00:14:24.090 "compare": false, 00:14:24.090 "compare_and_write": false, 00:14:24.090 "abort": true, 00:14:24.090 "seek_hole": false, 00:14:24.090 "seek_data": false, 00:14:24.090 "copy": true, 00:14:24.090 "nvme_iov_md": false 00:14:24.090 }, 00:14:24.090 "memory_domains": [ 00:14:24.090 { 00:14:24.090 "dma_device_id": "system", 00:14:24.090 "dma_device_type": 1 00:14:24.090 }, 00:14:24.090 { 00:14:24.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.090 "dma_device_type": 2 00:14:24.090 } 00:14:24.090 ], 00:14:24.090 "driver_specific": {} 00:14:24.090 } 00:14:24.090 ] 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.090 "name": "Existed_Raid", 00:14:24.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.090 "strip_size_kb": 64, 00:14:24.090 "state": "configuring", 00:14:24.090 "raid_level": "raid5f", 00:14:24.090 "superblock": false, 00:14:24.090 "num_base_bdevs": 3, 00:14:24.090 "num_base_bdevs_discovered": 1, 00:14:24.090 "num_base_bdevs_operational": 3, 00:14:24.090 "base_bdevs_list": [ 00:14:24.090 { 00:14:24.090 "name": "BaseBdev1", 00:14:24.090 "uuid": "45218523-6915-4188-9ef8-2c502ac5c560", 00:14:24.090 "is_configured": true, 00:14:24.090 "data_offset": 0, 00:14:24.090 "data_size": 65536 00:14:24.090 }, 00:14:24.090 { 00:14:24.090 "name": "BaseBdev2", 00:14:24.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.090 "is_configured": false, 00:14:24.090 "data_offset": 0, 00:14:24.090 "data_size": 0 00:14:24.090 }, 00:14:24.090 { 00:14:24.090 "name": "BaseBdev3", 00:14:24.090 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:24.090 "is_configured": false, 00:14:24.090 "data_offset": 0, 00:14:24.090 "data_size": 0 00:14:24.090 } 00:14:24.090 ] 00:14:24.090 }' 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.090 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.350 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:24.350 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 [2024-11-21 03:23:11.789516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.351 [2024-11-21 03:23:11.789577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 [2024-11-21 03:23:11.801553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.351 [2024-11-21 03:23:11.803586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.351 [2024-11-21 03:23:11.803662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.351 [2024-11-21 03:23:11.803706] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:14:24.351 [2024-11-21 03:23:11.803732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 03:23:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.351 "name": "Existed_Raid", 00:14:24.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.351 "strip_size_kb": 64, 00:14:24.351 "state": "configuring", 00:14:24.351 "raid_level": "raid5f", 00:14:24.351 "superblock": false, 00:14:24.351 "num_base_bdevs": 3, 00:14:24.351 "num_base_bdevs_discovered": 1, 00:14:24.351 "num_base_bdevs_operational": 3, 00:14:24.351 "base_bdevs_list": [ 00:14:24.351 { 00:14:24.351 "name": "BaseBdev1", 00:14:24.351 "uuid": "45218523-6915-4188-9ef8-2c502ac5c560", 00:14:24.351 "is_configured": true, 00:14:24.351 "data_offset": 0, 00:14:24.351 "data_size": 65536 00:14:24.351 }, 00:14:24.351 { 00:14:24.351 "name": "BaseBdev2", 00:14:24.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.351 "is_configured": false, 00:14:24.351 "data_offset": 0, 00:14:24.351 "data_size": 0 00:14:24.351 }, 00:14:24.351 { 00:14:24.351 "name": "BaseBdev3", 00:14:24.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.351 "is_configured": false, 00:14:24.351 "data_offset": 0, 00:14:24.351 "data_size": 0 00:14:24.351 } 00:14:24.351 ] 00:14:24.351 }' 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.351 03:23:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.921 
[2024-11-21 03:23:12.252831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.921 BaseBdev2 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.921 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.921 [ 00:14:24.921 { 00:14:24.921 "name": "BaseBdev2", 00:14:24.921 "aliases": [ 00:14:24.921 "b8e8384e-3d96-4dc7-b7bb-86546bd7647f" 00:14:24.921 ], 00:14:24.921 "product_name": "Malloc disk", 00:14:24.921 "block_size": 512, 00:14:24.921 "num_blocks": 
65536, 00:14:24.921 "uuid": "b8e8384e-3d96-4dc7-b7bb-86546bd7647f", 00:14:24.921 "assigned_rate_limits": { 00:14:24.921 "rw_ios_per_sec": 0, 00:14:24.921 "rw_mbytes_per_sec": 0, 00:14:24.921 "r_mbytes_per_sec": 0, 00:14:24.921 "w_mbytes_per_sec": 0 00:14:24.921 }, 00:14:24.921 "claimed": true, 00:14:24.921 "claim_type": "exclusive_write", 00:14:24.921 "zoned": false, 00:14:24.921 "supported_io_types": { 00:14:24.921 "read": true, 00:14:24.921 "write": true, 00:14:24.921 "unmap": true, 00:14:24.921 "flush": true, 00:14:24.921 "reset": true, 00:14:24.921 "nvme_admin": false, 00:14:24.921 "nvme_io": false, 00:14:24.921 "nvme_io_md": false, 00:14:24.921 "write_zeroes": true, 00:14:24.921 "zcopy": true, 00:14:24.921 "get_zone_info": false, 00:14:24.921 "zone_management": false, 00:14:24.921 "zone_append": false, 00:14:24.921 "compare": false, 00:14:24.921 "compare_and_write": false, 00:14:24.921 "abort": true, 00:14:24.921 "seek_hole": false, 00:14:24.921 "seek_data": false, 00:14:24.922 "copy": true, 00:14:24.922 "nvme_iov_md": false 00:14:24.922 }, 00:14:24.922 "memory_domains": [ 00:14:24.922 { 00:14:24.922 "dma_device_id": "system", 00:14:24.922 "dma_device_type": 1 00:14:24.922 }, 00:14:24.922 { 00:14:24.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.922 "dma_device_type": 2 00:14:24.922 } 00:14:24.922 ], 00:14:24.922 "driver_specific": {} 00:14:24.922 } 00:14:24.922 ] 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.922 03:23:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.922 "name": "Existed_Raid", 00:14:24.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.922 "strip_size_kb": 64, 00:14:24.922 "state": "configuring", 00:14:24.922 "raid_level": "raid5f", 00:14:24.922 "superblock": false, 00:14:24.922 "num_base_bdevs": 3, 00:14:24.922 
"num_base_bdevs_discovered": 2, 00:14:24.922 "num_base_bdevs_operational": 3, 00:14:24.922 "base_bdevs_list": [ 00:14:24.922 { 00:14:24.922 "name": "BaseBdev1", 00:14:24.922 "uuid": "45218523-6915-4188-9ef8-2c502ac5c560", 00:14:24.922 "is_configured": true, 00:14:24.922 "data_offset": 0, 00:14:24.922 "data_size": 65536 00:14:24.922 }, 00:14:24.922 { 00:14:24.922 "name": "BaseBdev2", 00:14:24.922 "uuid": "b8e8384e-3d96-4dc7-b7bb-86546bd7647f", 00:14:24.922 "is_configured": true, 00:14:24.922 "data_offset": 0, 00:14:24.922 "data_size": 65536 00:14:24.922 }, 00:14:24.922 { 00:14:24.922 "name": "BaseBdev3", 00:14:24.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.922 "is_configured": false, 00:14:24.922 "data_offset": 0, 00:14:24.922 "data_size": 0 00:14:24.922 } 00:14:24.922 ] 00:14:24.922 }' 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.922 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.182 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:25.182 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.182 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.182 [2024-11-21 03:23:12.743473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.182 [2024-11-21 03:23:12.743541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:25.182 [2024-11-21 03:23:12.743551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:25.182 [2024-11-21 03:23:12.743872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:25.182 [2024-11-21 03:23:12.744352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:25.182 [2024-11-21 03:23:12.744376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:25.182 [2024-11-21 03:23:12.744600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.442 BaseBdev3 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.442 [ 00:14:25.442 { 00:14:25.442 "name": "BaseBdev3", 00:14:25.442 "aliases": 
[ 00:14:25.442 "950a8caa-a4cd-4b50-b9a3-12e9d02e78f1" 00:14:25.442 ], 00:14:25.442 "product_name": "Malloc disk", 00:14:25.442 "block_size": 512, 00:14:25.442 "num_blocks": 65536, 00:14:25.442 "uuid": "950a8caa-a4cd-4b50-b9a3-12e9d02e78f1", 00:14:25.442 "assigned_rate_limits": { 00:14:25.442 "rw_ios_per_sec": 0, 00:14:25.442 "rw_mbytes_per_sec": 0, 00:14:25.442 "r_mbytes_per_sec": 0, 00:14:25.442 "w_mbytes_per_sec": 0 00:14:25.442 }, 00:14:25.442 "claimed": true, 00:14:25.442 "claim_type": "exclusive_write", 00:14:25.442 "zoned": false, 00:14:25.442 "supported_io_types": { 00:14:25.442 "read": true, 00:14:25.442 "write": true, 00:14:25.442 "unmap": true, 00:14:25.442 "flush": true, 00:14:25.442 "reset": true, 00:14:25.442 "nvme_admin": false, 00:14:25.442 "nvme_io": false, 00:14:25.442 "nvme_io_md": false, 00:14:25.442 "write_zeroes": true, 00:14:25.442 "zcopy": true, 00:14:25.442 "get_zone_info": false, 00:14:25.442 "zone_management": false, 00:14:25.442 "zone_append": false, 00:14:25.442 "compare": false, 00:14:25.442 "compare_and_write": false, 00:14:25.442 "abort": true, 00:14:25.442 "seek_hole": false, 00:14:25.442 "seek_data": false, 00:14:25.442 "copy": true, 00:14:25.442 "nvme_iov_md": false 00:14:25.442 }, 00:14:25.442 "memory_domains": [ 00:14:25.442 { 00:14:25.442 "dma_device_id": "system", 00:14:25.442 "dma_device_type": 1 00:14:25.442 }, 00:14:25.442 { 00:14:25.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.442 "dma_device_type": 2 00:14:25.442 } 00:14:25.442 ], 00:14:25.442 "driver_specific": {} 00:14:25.442 } 00:14:25.442 ] 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs 
)) 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.442 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.443 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.443 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.443 "name": "Existed_Raid", 00:14:25.443 "uuid": "fc7ea320-73e4-4df0-a680-c5f8e61abc76", 00:14:25.443 "strip_size_kb": 64, 
00:14:25.443 "state": "online", 00:14:25.443 "raid_level": "raid5f", 00:14:25.443 "superblock": false, 00:14:25.443 "num_base_bdevs": 3, 00:14:25.443 "num_base_bdevs_discovered": 3, 00:14:25.443 "num_base_bdevs_operational": 3, 00:14:25.443 "base_bdevs_list": [ 00:14:25.443 { 00:14:25.443 "name": "BaseBdev1", 00:14:25.443 "uuid": "45218523-6915-4188-9ef8-2c502ac5c560", 00:14:25.443 "is_configured": true, 00:14:25.443 "data_offset": 0, 00:14:25.443 "data_size": 65536 00:14:25.443 }, 00:14:25.443 { 00:14:25.443 "name": "BaseBdev2", 00:14:25.443 "uuid": "b8e8384e-3d96-4dc7-b7bb-86546bd7647f", 00:14:25.443 "is_configured": true, 00:14:25.443 "data_offset": 0, 00:14:25.443 "data_size": 65536 00:14:25.443 }, 00:14:25.443 { 00:14:25.443 "name": "BaseBdev3", 00:14:25.443 "uuid": "950a8caa-a4cd-4b50-b9a3-12e9d02e78f1", 00:14:25.443 "is_configured": true, 00:14:25.443 "data_offset": 0, 00:14:25.443 "data_size": 65536 00:14:25.443 } 00:14:25.443 ] 00:14:25.443 }' 00:14:25.443 03:23:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.443 03:23:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.703 [2024-11-21 03:23:13.091925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.703 "name": "Existed_Raid", 00:14:25.703 "aliases": [ 00:14:25.703 "fc7ea320-73e4-4df0-a680-c5f8e61abc76" 00:14:25.703 ], 00:14:25.703 "product_name": "Raid Volume", 00:14:25.703 "block_size": 512, 00:14:25.703 "num_blocks": 131072, 00:14:25.703 "uuid": "fc7ea320-73e4-4df0-a680-c5f8e61abc76", 00:14:25.703 "assigned_rate_limits": { 00:14:25.703 "rw_ios_per_sec": 0, 00:14:25.703 "rw_mbytes_per_sec": 0, 00:14:25.703 "r_mbytes_per_sec": 0, 00:14:25.703 "w_mbytes_per_sec": 0 00:14:25.703 }, 00:14:25.703 "claimed": false, 00:14:25.703 "zoned": false, 00:14:25.703 "supported_io_types": { 00:14:25.703 "read": true, 00:14:25.703 "write": true, 00:14:25.703 "unmap": false, 00:14:25.703 "flush": false, 00:14:25.703 "reset": true, 00:14:25.703 "nvme_admin": false, 00:14:25.703 "nvme_io": false, 00:14:25.703 "nvme_io_md": false, 00:14:25.703 "write_zeroes": true, 00:14:25.703 "zcopy": false, 00:14:25.703 "get_zone_info": false, 00:14:25.703 "zone_management": false, 00:14:25.703 "zone_append": false, 00:14:25.703 "compare": false, 00:14:25.703 "compare_and_write": false, 00:14:25.703 "abort": false, 00:14:25.703 "seek_hole": false, 00:14:25.703 "seek_data": false, 00:14:25.703 "copy": false, 00:14:25.703 "nvme_iov_md": false 00:14:25.703 }, 00:14:25.703 "driver_specific": { 00:14:25.703 "raid": { 00:14:25.703 "uuid": 
"fc7ea320-73e4-4df0-a680-c5f8e61abc76", 00:14:25.703 "strip_size_kb": 64, 00:14:25.703 "state": "online", 00:14:25.703 "raid_level": "raid5f", 00:14:25.703 "superblock": false, 00:14:25.703 "num_base_bdevs": 3, 00:14:25.703 "num_base_bdevs_discovered": 3, 00:14:25.703 "num_base_bdevs_operational": 3, 00:14:25.703 "base_bdevs_list": [ 00:14:25.703 { 00:14:25.703 "name": "BaseBdev1", 00:14:25.703 "uuid": "45218523-6915-4188-9ef8-2c502ac5c560", 00:14:25.703 "is_configured": true, 00:14:25.703 "data_offset": 0, 00:14:25.703 "data_size": 65536 00:14:25.703 }, 00:14:25.703 { 00:14:25.703 "name": "BaseBdev2", 00:14:25.703 "uuid": "b8e8384e-3d96-4dc7-b7bb-86546bd7647f", 00:14:25.703 "is_configured": true, 00:14:25.703 "data_offset": 0, 00:14:25.703 "data_size": 65536 00:14:25.703 }, 00:14:25.703 { 00:14:25.703 "name": "BaseBdev3", 00:14:25.703 "uuid": "950a8caa-a4cd-4b50-b9a3-12e9d02e78f1", 00:14:25.703 "is_configured": true, 00:14:25.703 "data_offset": 0, 00:14:25.703 "data_size": 65536 00:14:25.703 } 00:14:25.703 ] 00:14:25.703 } 00:14:25.703 } 00:14:25.703 }' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:25.703 BaseBdev2 00:14:25.703 BaseBdev3' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.703 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.964 [2024-11-21 03:23:13.335792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.964 "name": "Existed_Raid", 00:14:25.964 "uuid": "fc7ea320-73e4-4df0-a680-c5f8e61abc76", 00:14:25.964 "strip_size_kb": 64, 00:14:25.964 "state": "online", 00:14:25.964 "raid_level": "raid5f", 00:14:25.964 "superblock": false, 00:14:25.964 "num_base_bdevs": 3, 00:14:25.964 "num_base_bdevs_discovered": 2, 00:14:25.964 "num_base_bdevs_operational": 2, 00:14:25.964 "base_bdevs_list": [ 00:14:25.964 { 00:14:25.964 "name": null, 00:14:25.964 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:25.964 "is_configured": false, 00:14:25.964 "data_offset": 0, 00:14:25.964 "data_size": 65536 00:14:25.964 }, 00:14:25.964 { 00:14:25.964 "name": "BaseBdev2", 00:14:25.964 "uuid": "b8e8384e-3d96-4dc7-b7bb-86546bd7647f", 00:14:25.964 "is_configured": true, 00:14:25.964 "data_offset": 0, 00:14:25.964 "data_size": 65536 00:14:25.964 }, 00:14:25.964 { 00:14:25.964 "name": "BaseBdev3", 00:14:25.964 "uuid": "950a8caa-a4cd-4b50-b9a3-12e9d02e78f1", 00:14:25.964 "is_configured": true, 00:14:25.964 "data_offset": 0, 00:14:25.964 "data_size": 65536 00:14:25.964 } 00:14:25.964 ] 00:14:25.964 }' 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.964 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.224 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:26.485 
03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 [2024-11-21 03:23:13.820289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.485 [2024-11-21 03:23:13.820420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.485 [2024-11-21 03:23:13.841182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.485 [2024-11-21 03:23:13.901252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.485 [2024-11-21 03:23:13.901325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 BaseBdev2 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.485 03:23:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.485 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.485 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 [ 00:14:26.485 { 00:14:26.485 "name": "BaseBdev2", 00:14:26.485 "aliases": [ 00:14:26.485 "d932e8ab-0b23-466a-aabe-90689f88ec5e" 00:14:26.485 ], 00:14:26.485 "product_name": "Malloc disk", 00:14:26.485 "block_size": 512, 00:14:26.485 "num_blocks": 65536, 00:14:26.485 "uuid": 
"d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:26.485 "assigned_rate_limits": { 00:14:26.485 "rw_ios_per_sec": 0, 00:14:26.485 "rw_mbytes_per_sec": 0, 00:14:26.485 "r_mbytes_per_sec": 0, 00:14:26.485 "w_mbytes_per_sec": 0 00:14:26.485 }, 00:14:26.485 "claimed": false, 00:14:26.485 "zoned": false, 00:14:26.485 "supported_io_types": { 00:14:26.485 "read": true, 00:14:26.485 "write": true, 00:14:26.485 "unmap": true, 00:14:26.485 "flush": true, 00:14:26.485 "reset": true, 00:14:26.485 "nvme_admin": false, 00:14:26.485 "nvme_io": false, 00:14:26.485 "nvme_io_md": false, 00:14:26.485 "write_zeroes": true, 00:14:26.485 "zcopy": true, 00:14:26.485 "get_zone_info": false, 00:14:26.485 "zone_management": false, 00:14:26.486 "zone_append": false, 00:14:26.486 "compare": false, 00:14:26.486 "compare_and_write": false, 00:14:26.486 "abort": true, 00:14:26.486 "seek_hole": false, 00:14:26.486 "seek_data": false, 00:14:26.486 "copy": true, 00:14:26.486 "nvme_iov_md": false 00:14:26.486 }, 00:14:26.486 "memory_domains": [ 00:14:26.486 { 00:14:26.486 "dma_device_id": "system", 00:14:26.486 "dma_device_type": 1 00:14:26.486 }, 00:14:26.486 { 00:14:26.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.486 "dma_device_type": 2 00:14:26.486 } 00:14:26.486 ], 00:14:26.486 "driver_specific": {} 00:14:26.486 } 00:14:26.486 ] 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:26.486 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.746 BaseBdev3 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.746 [ 00:14:26.746 { 00:14:26.746 "name": "BaseBdev3", 00:14:26.746 "aliases": [ 00:14:26.746 "3ce59e54-397c-4b8d-b36f-4ff4d2809c86" 00:14:26.746 ], 00:14:26.746 "product_name": "Malloc disk", 00:14:26.746 "block_size": 512, 00:14:26.746 "num_blocks": 
65536, 00:14:26.746 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:26.746 "assigned_rate_limits": { 00:14:26.746 "rw_ios_per_sec": 0, 00:14:26.746 "rw_mbytes_per_sec": 0, 00:14:26.746 "r_mbytes_per_sec": 0, 00:14:26.746 "w_mbytes_per_sec": 0 00:14:26.746 }, 00:14:26.746 "claimed": false, 00:14:26.746 "zoned": false, 00:14:26.746 "supported_io_types": { 00:14:26.746 "read": true, 00:14:26.746 "write": true, 00:14:26.746 "unmap": true, 00:14:26.746 "flush": true, 00:14:26.746 "reset": true, 00:14:26.746 "nvme_admin": false, 00:14:26.746 "nvme_io": false, 00:14:26.746 "nvme_io_md": false, 00:14:26.746 "write_zeroes": true, 00:14:26.746 "zcopy": true, 00:14:26.746 "get_zone_info": false, 00:14:26.746 "zone_management": false, 00:14:26.746 "zone_append": false, 00:14:26.746 "compare": false, 00:14:26.746 "compare_and_write": false, 00:14:26.746 "abort": true, 00:14:26.746 "seek_hole": false, 00:14:26.746 "seek_data": false, 00:14:26.746 "copy": true, 00:14:26.746 "nvme_iov_md": false 00:14:26.746 }, 00:14:26.746 "memory_domains": [ 00:14:26.746 { 00:14:26.746 "dma_device_id": "system", 00:14:26.746 "dma_device_type": 1 00:14:26.746 }, 00:14:26.746 { 00:14:26.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.746 "dma_device_type": 2 00:14:26.746 } 00:14:26.746 ], 00:14:26.746 "driver_specific": {} 00:14:26.746 } 00:14:26.746 ] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.746 03:23:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.746 [2024-11-21 03:23:14.103488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.746 [2024-11-21 03:23:14.103557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.746 [2024-11-21 03:23:14.103577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.746 [2024-11-21 03:23:14.105721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.746 "name": "Existed_Raid", 00:14:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.746 "strip_size_kb": 64, 00:14:26.746 "state": "configuring", 00:14:26.746 "raid_level": "raid5f", 00:14:26.746 "superblock": false, 00:14:26.746 "num_base_bdevs": 3, 00:14:26.746 "num_base_bdevs_discovered": 2, 00:14:26.746 "num_base_bdevs_operational": 3, 00:14:26.746 "base_bdevs_list": [ 00:14:26.746 { 00:14:26.746 "name": "BaseBdev1", 00:14:26.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.746 "is_configured": false, 00:14:26.746 "data_offset": 0, 00:14:26.746 "data_size": 0 00:14:26.746 }, 00:14:26.746 { 00:14:26.746 "name": "BaseBdev2", 00:14:26.746 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:26.746 "is_configured": true, 00:14:26.746 "data_offset": 0, 00:14:26.746 "data_size": 65536 00:14:26.746 }, 00:14:26.746 { 00:14:26.746 "name": "BaseBdev3", 00:14:26.746 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:26.746 "is_configured": true, 00:14:26.746 "data_offset": 0, 00:14:26.746 "data_size": 65536 00:14:26.746 } 00:14:26.746 ] 00:14:26.746 }' 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.746 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.015 [2024-11-21 03:23:14.559628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.015 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.016 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.016 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.016 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:27.307 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.307 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.307 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.307 "name": "Existed_Raid", 00:14:27.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.307 "strip_size_kb": 64, 00:14:27.307 "state": "configuring", 00:14:27.307 "raid_level": "raid5f", 00:14:27.307 "superblock": false, 00:14:27.307 "num_base_bdevs": 3, 00:14:27.308 "num_base_bdevs_discovered": 1, 00:14:27.308 "num_base_bdevs_operational": 3, 00:14:27.308 "base_bdevs_list": [ 00:14:27.308 { 00:14:27.308 "name": "BaseBdev1", 00:14:27.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.308 "is_configured": false, 00:14:27.308 "data_offset": 0, 00:14:27.308 "data_size": 0 00:14:27.308 }, 00:14:27.308 { 00:14:27.308 "name": null, 00:14:27.308 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:27.308 "is_configured": false, 00:14:27.308 "data_offset": 0, 00:14:27.308 "data_size": 65536 00:14:27.308 }, 00:14:27.308 { 00:14:27.308 "name": "BaseBdev3", 00:14:27.308 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:27.308 "is_configured": true, 00:14:27.308 "data_offset": 0, 00:14:27.308 "data_size": 65536 00:14:27.308 } 00:14:27.308 ] 00:14:27.308 }' 00:14:27.308 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.308 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.583 03:23:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.583 [2024-11-21 03:23:15.000557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.583 BaseBdev1 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.583 03:23:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.583 [ 00:14:27.583 { 00:14:27.583 "name": "BaseBdev1", 00:14:27.583 "aliases": [ 00:14:27.583 "69aee204-f254-4612-95ee-0e4dc3b69f63" 00:14:27.583 ], 00:14:27.583 "product_name": "Malloc disk", 00:14:27.583 "block_size": 512, 00:14:27.583 "num_blocks": 65536, 00:14:27.583 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:27.583 "assigned_rate_limits": { 00:14:27.583 "rw_ios_per_sec": 0, 00:14:27.583 "rw_mbytes_per_sec": 0, 00:14:27.583 "r_mbytes_per_sec": 0, 00:14:27.583 "w_mbytes_per_sec": 0 00:14:27.583 }, 00:14:27.583 "claimed": true, 00:14:27.583 "claim_type": "exclusive_write", 00:14:27.583 "zoned": false, 00:14:27.583 "supported_io_types": { 00:14:27.583 "read": true, 00:14:27.583 "write": true, 00:14:27.583 "unmap": true, 00:14:27.583 "flush": true, 00:14:27.583 "reset": true, 00:14:27.583 "nvme_admin": false, 00:14:27.583 "nvme_io": false, 00:14:27.583 "nvme_io_md": false, 00:14:27.583 "write_zeroes": true, 00:14:27.583 "zcopy": true, 00:14:27.583 "get_zone_info": false, 00:14:27.583 "zone_management": false, 00:14:27.583 "zone_append": false, 00:14:27.583 "compare": false, 00:14:27.583 "compare_and_write": false, 00:14:27.583 "abort": true, 00:14:27.583 "seek_hole": false, 00:14:27.583 "seek_data": false, 00:14:27.583 "copy": true, 00:14:27.583 "nvme_iov_md": false 00:14:27.583 }, 00:14:27.583 "memory_domains": [ 00:14:27.583 { 00:14:27.583 "dma_device_id": "system", 00:14:27.583 "dma_device_type": 1 
00:14:27.583 }, 00:14:27.583 { 00:14:27.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.583 "dma_device_type": 2 00:14:27.583 } 00:14:27.583 ], 00:14:27.583 "driver_specific": {} 00:14:27.583 } 00:14:27.583 ] 00:14:27.583 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.584 
03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.584 "name": "Existed_Raid", 00:14:27.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.584 "strip_size_kb": 64, 00:14:27.584 "state": "configuring", 00:14:27.584 "raid_level": "raid5f", 00:14:27.584 "superblock": false, 00:14:27.584 "num_base_bdevs": 3, 00:14:27.584 "num_base_bdevs_discovered": 2, 00:14:27.584 "num_base_bdevs_operational": 3, 00:14:27.584 "base_bdevs_list": [ 00:14:27.584 { 00:14:27.584 "name": "BaseBdev1", 00:14:27.584 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:27.584 "is_configured": true, 00:14:27.584 "data_offset": 0, 00:14:27.584 "data_size": 65536 00:14:27.584 }, 00:14:27.584 { 00:14:27.584 "name": null, 00:14:27.584 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:27.584 "is_configured": false, 00:14:27.584 "data_offset": 0, 00:14:27.584 "data_size": 65536 00:14:27.584 }, 00:14:27.584 { 00:14:27.584 "name": "BaseBdev3", 00:14:27.584 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:27.584 "is_configured": true, 00:14:27.584 "data_offset": 0, 00:14:27.584 "data_size": 65536 00:14:27.584 } 00:14:27.584 ] 00:14:27.584 }' 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.584 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.153 03:23:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.153 [2024-11-21 03:23:15.472717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.153 
03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.153 "name": "Existed_Raid", 00:14:28.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.153 "strip_size_kb": 64, 00:14:28.153 "state": "configuring", 00:14:28.153 "raid_level": "raid5f", 00:14:28.153 "superblock": false, 00:14:28.153 "num_base_bdevs": 3, 00:14:28.153 "num_base_bdevs_discovered": 1, 00:14:28.153 "num_base_bdevs_operational": 3, 00:14:28.153 "base_bdevs_list": [ 00:14:28.153 { 00:14:28.153 "name": "BaseBdev1", 00:14:28.153 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:28.153 "is_configured": true, 00:14:28.153 "data_offset": 0, 00:14:28.153 "data_size": 65536 00:14:28.153 }, 00:14:28.153 { 00:14:28.153 "name": null, 00:14:28.153 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:28.153 "is_configured": false, 00:14:28.153 "data_offset": 0, 00:14:28.153 "data_size": 65536 00:14:28.153 }, 00:14:28.153 { 00:14:28.153 "name": null, 00:14:28.153 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:28.153 "is_configured": false, 00:14:28.153 "data_offset": 0, 00:14:28.153 "data_size": 65536 00:14:28.153 } 00:14:28.153 ] 00:14:28.153 }' 00:14:28.153 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.153 03:23:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.413 [2024-11-21 03:23:15.924857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.413 
03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.413 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.672 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.672 "name": "Existed_Raid", 00:14:28.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.672 "strip_size_kb": 64, 00:14:28.672 "state": "configuring", 00:14:28.672 "raid_level": "raid5f", 00:14:28.672 "superblock": false, 00:14:28.672 "num_base_bdevs": 3, 00:14:28.672 "num_base_bdevs_discovered": 2, 00:14:28.672 "num_base_bdevs_operational": 3, 00:14:28.672 "base_bdevs_list": [ 00:14:28.672 { 00:14:28.672 "name": "BaseBdev1", 00:14:28.672 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:28.672 "is_configured": true, 00:14:28.672 "data_offset": 0, 00:14:28.672 "data_size": 65536 00:14:28.672 }, 00:14:28.672 { 00:14:28.672 "name": null, 00:14:28.672 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:28.672 "is_configured": 
false, 00:14:28.672 "data_offset": 0, 00:14:28.672 "data_size": 65536 00:14:28.672 }, 00:14:28.672 { 00:14:28.672 "name": "BaseBdev3", 00:14:28.672 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:28.672 "is_configured": true, 00:14:28.672 "data_offset": 0, 00:14:28.672 "data_size": 65536 00:14:28.672 } 00:14:28.672 ] 00:14:28.672 }' 00:14:28.672 03:23:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.672 03:23:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.932 [2024-11-21 03:23:16.409046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.932 03:23:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.932 "name": "Existed_Raid", 00:14:28.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.932 "strip_size_kb": 64, 00:14:28.932 "state": "configuring", 00:14:28.932 "raid_level": "raid5f", 00:14:28.932 "superblock": false, 00:14:28.932 "num_base_bdevs": 3, 00:14:28.932 
"num_base_bdevs_discovered": 1, 00:14:28.932 "num_base_bdevs_operational": 3, 00:14:28.932 "base_bdevs_list": [ 00:14:28.932 { 00:14:28.932 "name": null, 00:14:28.932 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:28.932 "is_configured": false, 00:14:28.932 "data_offset": 0, 00:14:28.932 "data_size": 65536 00:14:28.932 }, 00:14:28.932 { 00:14:28.932 "name": null, 00:14:28.932 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:28.932 "is_configured": false, 00:14:28.932 "data_offset": 0, 00:14:28.932 "data_size": 65536 00:14:28.932 }, 00:14:28.932 { 00:14:28.932 "name": "BaseBdev3", 00:14:28.932 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:28.932 "is_configured": true, 00:14:28.932 "data_offset": 0, 00:14:28.932 "data_size": 65536 00:14:28.932 } 00:14:28.932 ] 00:14:28.932 }' 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.932 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.502 03:23:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.502 [2024-11-21 03:23:16.892346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.502 03:23:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.502 "name": "Existed_Raid", 00:14:29.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.502 "strip_size_kb": 64, 00:14:29.502 "state": "configuring", 00:14:29.502 "raid_level": "raid5f", 00:14:29.502 "superblock": false, 00:14:29.502 "num_base_bdevs": 3, 00:14:29.502 "num_base_bdevs_discovered": 2, 00:14:29.502 "num_base_bdevs_operational": 3, 00:14:29.502 "base_bdevs_list": [ 00:14:29.502 { 00:14:29.502 "name": null, 00:14:29.502 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:29.502 "is_configured": false, 00:14:29.502 "data_offset": 0, 00:14:29.502 "data_size": 65536 00:14:29.502 }, 00:14:29.502 { 00:14:29.502 "name": "BaseBdev2", 00:14:29.502 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:29.502 "is_configured": true, 00:14:29.502 "data_offset": 0, 00:14:29.502 "data_size": 65536 00:14:29.502 }, 00:14:29.502 { 00:14:29.502 "name": "BaseBdev3", 00:14:29.502 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:29.502 "is_configured": true, 00:14:29.502 "data_offset": 0, 00:14:29.502 "data_size": 65536 00:14:29.502 } 00:14:29.502 ] 00:14:29.502 }' 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.502 03:23:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.078 03:23:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 69aee204-f254-4612-95ee-0e4dc3b69f63 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.078 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.079 [2024-11-21 03:23:17.441063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:30.079 [2024-11-21 03:23:17.441122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:30.079 [2024-11-21 03:23:17.441130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:30.079 [2024-11-21 03:23:17.441415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:14:30.079 [2024-11-21 03:23:17.441874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:30.079 [2024-11-21 03:23:17.441899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:30.079 [2024-11-21 
03:23:17.442097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.079 NewBaseBdev 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.079 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.079 [ 00:14:30.079 { 00:14:30.079 "name": "NewBaseBdev", 00:14:30.079 "aliases": [ 00:14:30.079 "69aee204-f254-4612-95ee-0e4dc3b69f63" 00:14:30.079 ], 00:14:30.079 "product_name": "Malloc disk", 00:14:30.079 "block_size": 512, 00:14:30.079 "num_blocks": 65536, 00:14:30.079 
"uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:30.079 "assigned_rate_limits": { 00:14:30.079 "rw_ios_per_sec": 0, 00:14:30.079 "rw_mbytes_per_sec": 0, 00:14:30.079 "r_mbytes_per_sec": 0, 00:14:30.079 "w_mbytes_per_sec": 0 00:14:30.079 }, 00:14:30.079 "claimed": true, 00:14:30.079 "claim_type": "exclusive_write", 00:14:30.079 "zoned": false, 00:14:30.079 "supported_io_types": { 00:14:30.079 "read": true, 00:14:30.079 "write": true, 00:14:30.079 "unmap": true, 00:14:30.079 "flush": true, 00:14:30.079 "reset": true, 00:14:30.079 "nvme_admin": false, 00:14:30.079 "nvme_io": false, 00:14:30.079 "nvme_io_md": false, 00:14:30.079 "write_zeroes": true, 00:14:30.079 "zcopy": true, 00:14:30.079 "get_zone_info": false, 00:14:30.079 "zone_management": false, 00:14:30.079 "zone_append": false, 00:14:30.079 "compare": false, 00:14:30.079 "compare_and_write": false, 00:14:30.080 "abort": true, 00:14:30.080 "seek_hole": false, 00:14:30.080 "seek_data": false, 00:14:30.080 "copy": true, 00:14:30.080 "nvme_iov_md": false 00:14:30.080 }, 00:14:30.080 "memory_domains": [ 00:14:30.080 { 00:14:30.080 "dma_device_id": "system", 00:14:30.080 "dma_device_type": 1 00:14:30.080 }, 00:14:30.080 { 00:14:30.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.080 "dma_device_type": 2 00:14:30.080 } 00:14:30.080 ], 00:14:30.080 "driver_specific": {} 00:14:30.080 } 00:14:30.080 ] 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.080 
03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.080 "name": "Existed_Raid", 00:14:30.080 "uuid": "8192a7e7-c8dd-4220-b447-013bf1775812", 00:14:30.080 "strip_size_kb": 64, 00:14:30.080 "state": "online", 00:14:30.080 "raid_level": "raid5f", 00:14:30.080 "superblock": false, 00:14:30.080 "num_base_bdevs": 3, 00:14:30.080 "num_base_bdevs_discovered": 3, 00:14:30.080 "num_base_bdevs_operational": 3, 00:14:30.080 "base_bdevs_list": [ 00:14:30.080 { 00:14:30.080 "name": "NewBaseBdev", 00:14:30.080 "uuid": "69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:30.080 "is_configured": 
true, 00:14:30.080 "data_offset": 0, 00:14:30.080 "data_size": 65536 00:14:30.080 }, 00:14:30.080 { 00:14:30.080 "name": "BaseBdev2", 00:14:30.080 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:30.080 "is_configured": true, 00:14:30.080 "data_offset": 0, 00:14:30.080 "data_size": 65536 00:14:30.080 }, 00:14:30.080 { 00:14:30.080 "name": "BaseBdev3", 00:14:30.080 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:30.080 "is_configured": true, 00:14:30.080 "data_offset": 0, 00:14:30.080 "data_size": 65536 00:14:30.080 } 00:14:30.080 ] 00:14:30.080 }' 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.080 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.340 [2024-11-21 03:23:17.877399] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.340 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.340 "name": "Existed_Raid", 00:14:30.340 "aliases": [ 00:14:30.340 "8192a7e7-c8dd-4220-b447-013bf1775812" 00:14:30.340 ], 00:14:30.340 "product_name": "Raid Volume", 00:14:30.340 "block_size": 512, 00:14:30.340 "num_blocks": 131072, 00:14:30.340 "uuid": "8192a7e7-c8dd-4220-b447-013bf1775812", 00:14:30.340 "assigned_rate_limits": { 00:14:30.340 "rw_ios_per_sec": 0, 00:14:30.340 "rw_mbytes_per_sec": 0, 00:14:30.340 "r_mbytes_per_sec": 0, 00:14:30.340 "w_mbytes_per_sec": 0 00:14:30.340 }, 00:14:30.340 "claimed": false, 00:14:30.340 "zoned": false, 00:14:30.340 "supported_io_types": { 00:14:30.340 "read": true, 00:14:30.341 "write": true, 00:14:30.341 "unmap": false, 00:14:30.341 "flush": false, 00:14:30.341 "reset": true, 00:14:30.341 "nvme_admin": false, 00:14:30.341 "nvme_io": false, 00:14:30.341 "nvme_io_md": false, 00:14:30.341 "write_zeroes": true, 00:14:30.341 "zcopy": false, 00:14:30.341 "get_zone_info": false, 00:14:30.341 "zone_management": false, 00:14:30.341 "zone_append": false, 00:14:30.341 "compare": false, 00:14:30.341 "compare_and_write": false, 00:14:30.341 "abort": false, 00:14:30.341 "seek_hole": false, 00:14:30.341 "seek_data": false, 00:14:30.341 "copy": false, 00:14:30.341 "nvme_iov_md": false 00:14:30.341 }, 00:14:30.341 "driver_specific": { 00:14:30.341 "raid": { 00:14:30.341 "uuid": "8192a7e7-c8dd-4220-b447-013bf1775812", 00:14:30.341 "strip_size_kb": 64, 00:14:30.341 "state": "online", 00:14:30.341 "raid_level": "raid5f", 00:14:30.341 "superblock": false, 00:14:30.341 "num_base_bdevs": 3, 00:14:30.341 "num_base_bdevs_discovered": 3, 00:14:30.341 "num_base_bdevs_operational": 3, 00:14:30.341 "base_bdevs_list": [ 00:14:30.341 { 00:14:30.341 "name": "NewBaseBdev", 00:14:30.341 "uuid": 
"69aee204-f254-4612-95ee-0e4dc3b69f63", 00:14:30.341 "is_configured": true, 00:14:30.341 "data_offset": 0, 00:14:30.341 "data_size": 65536 00:14:30.341 }, 00:14:30.341 { 00:14:30.341 "name": "BaseBdev2", 00:14:30.341 "uuid": "d932e8ab-0b23-466a-aabe-90689f88ec5e", 00:14:30.341 "is_configured": true, 00:14:30.341 "data_offset": 0, 00:14:30.341 "data_size": 65536 00:14:30.341 }, 00:14:30.341 { 00:14:30.341 "name": "BaseBdev3", 00:14:30.341 "uuid": "3ce59e54-397c-4b8d-b36f-4ff4d2809c86", 00:14:30.341 "is_configured": true, 00:14:30.341 "data_offset": 0, 00:14:30.341 "data_size": 65536 00:14:30.341 } 00:14:30.341 ] 00:14:30.341 } 00:14:30.341 } 00:14:30.341 }' 00:14:30.341 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.600 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:30.601 BaseBdev2 00:14:30.601 BaseBdev3' 00:14:30.601 03:23:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 [2024-11-21 03:23:18.141273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.601 [2024-11-21 03:23:18.141305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.601 [2024-11-21 03:23:18.141385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.601 [2024-11-21 03:23:18.141659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.601 [2024-11-21 03:23:18.141678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92500 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 92500 ']' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 92500 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.601 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92500 00:14:30.860 killing 
process with pid 92500 00:14:30.860 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.860 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.860 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92500' 00:14:30.860 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 92500 00:14:30.860 [2024-11-21 03:23:18.190131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.860 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 92500 00:14:30.861 [2024-11-21 03:23:18.247908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:31.120 00:14:31.120 real 0m8.565s 00:14:31.120 user 0m14.279s 00:14:31.120 sys 0m1.916s 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.120 ************************************ 00:14:31.120 END TEST raid5f_state_function_test 00:14:31.120 ************************************ 00:14:31.120 03:23:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:31.120 03:23:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:31.120 03:23:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.120 03:23:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.120 ************************************ 00:14:31.120 START TEST raid5f_state_function_test_sb 00:14:31.120 ************************************ 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:31.120 
03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93100 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:31.120 Process raid pid: 93100 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93100' 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93100 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 93100 ']' 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.120 03:23:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.120 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.121 03:23:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.121 03:23:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.121 03:23:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.380 [2024-11-21 03:23:18.744922] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:14:31.380 [2024-11-21 03:23:18.745063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.380 [2024-11-21 03:23:18.880051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
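The `waitforlisten 93100` step above blocks until the freshly started `bdev_svc` process is accepting RPCs on /var/tmp/spdk.sock. A minimal bash sketch of that poll-until-socket-exists pattern, with a temp file standing in for the real UNIX-domain socket (the path, timeout, and retry count here are illustrative, not the helper's actual values):

```shell
# Poll until a background process has created its listen socket.
# A plain temp file stands in for /var/tmp/spdk.sock in this sketch.
sock=$(mktemp -u)
( sleep 0.2; : > "$sock" ) &      # stand-in for bdev_svc opening its RPC socket
svc_pid=$!

listening=no
for _ in $(seq 1 50); do          # illustrative ~5s timeout; the real helper retries 100 times
  if [ -e "$sock" ]; then
    listening=yes
    break
  fi
  sleep 0.1
done

echo "$listening"                 # prints "yes" once the socket path exists
wait "$svc_pid"
rm -f "$sock"
```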
00:14:31.380 [2024-11-21 03:23:18.903028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.640 [2024-11-21 03:23:18.945107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.640 [2024-11-21 03:23:19.020873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.640 [2024-11-21 03:23:19.020912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.209 [2024-11-21 03:23:19.573091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.209 [2024-11-21 03:23:19.573149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.209 [2024-11-21 03:23:19.573162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.209 [2024-11-21 03:23:19.573169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.209 [2024-11-21 03:23:19.573183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.209 [2024-11-21 03:23:19.573190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.209 03:23:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.209 "name": "Existed_Raid", 00:14:32.209 "uuid": "55320197-c00c-4e82-9639-9ae45554dd05", 
00:14:32.209 "strip_size_kb": 64, 00:14:32.209 "state": "configuring", 00:14:32.209 "raid_level": "raid5f", 00:14:32.209 "superblock": true, 00:14:32.209 "num_base_bdevs": 3, 00:14:32.209 "num_base_bdevs_discovered": 0, 00:14:32.209 "num_base_bdevs_operational": 3, 00:14:32.209 "base_bdevs_list": [ 00:14:32.209 { 00:14:32.209 "name": "BaseBdev1", 00:14:32.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.209 "is_configured": false, 00:14:32.209 "data_offset": 0, 00:14:32.209 "data_size": 0 00:14:32.209 }, 00:14:32.209 { 00:14:32.209 "name": "BaseBdev2", 00:14:32.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.209 "is_configured": false, 00:14:32.209 "data_offset": 0, 00:14:32.209 "data_size": 0 00:14:32.209 }, 00:14:32.209 { 00:14:32.209 "name": "BaseBdev3", 00:14:32.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.209 "is_configured": false, 00:14:32.209 "data_offset": 0, 00:14:32.209 "data_size": 0 00:14:32.209 } 00:14:32.209 ] 00:14:32.209 }' 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.209 03:23:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.469 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.469 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.469 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.469 [2024-11-21 03:23:20.021158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.469 [2024-11-21 03:23:20.021284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:32.469 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.469 03:23:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.469 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.469 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.729 [2024-11-21 03:23:20.033167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.729 [2024-11-21 03:23:20.033269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.729 [2024-11-21 03:23:20.033318] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.729 [2024-11-21 03:23:20.033339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.729 [2024-11-21 03:23:20.033360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.729 [2024-11-21 03:23:20.033397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.729 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.729 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.729 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.729 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.729 [2024-11-21 03:23:20.060347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.729 BaseBdev1 00:14:32.729 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.729 03:23:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 [ 00:14:32.730 { 00:14:32.730 "name": "BaseBdev1", 00:14:32.730 "aliases": [ 00:14:32.730 "a7c35cfb-e11f-41f2-8695-c3dcdc143c29" 00:14:32.730 ], 00:14:32.730 "product_name": "Malloc disk", 00:14:32.730 "block_size": 512, 00:14:32.730 "num_blocks": 65536, 00:14:32.730 "uuid": "a7c35cfb-e11f-41f2-8695-c3dcdc143c29", 00:14:32.730 "assigned_rate_limits": { 00:14:32.730 "rw_ios_per_sec": 0, 00:14:32.730 "rw_mbytes_per_sec": 0, 00:14:32.730 "r_mbytes_per_sec": 0, 00:14:32.730 "w_mbytes_per_sec": 0 00:14:32.730 }, 
00:14:32.730 "claimed": true, 00:14:32.730 "claim_type": "exclusive_write", 00:14:32.730 "zoned": false, 00:14:32.730 "supported_io_types": { 00:14:32.730 "read": true, 00:14:32.730 "write": true, 00:14:32.730 "unmap": true, 00:14:32.730 "flush": true, 00:14:32.730 "reset": true, 00:14:32.730 "nvme_admin": false, 00:14:32.730 "nvme_io": false, 00:14:32.730 "nvme_io_md": false, 00:14:32.730 "write_zeroes": true, 00:14:32.730 "zcopy": true, 00:14:32.730 "get_zone_info": false, 00:14:32.730 "zone_management": false, 00:14:32.730 "zone_append": false, 00:14:32.730 "compare": false, 00:14:32.730 "compare_and_write": false, 00:14:32.730 "abort": true, 00:14:32.730 "seek_hole": false, 00:14:32.730 "seek_data": false, 00:14:32.730 "copy": true, 00:14:32.730 "nvme_iov_md": false 00:14:32.730 }, 00:14:32.730 "memory_domains": [ 00:14:32.730 { 00:14:32.730 "dma_device_id": "system", 00:14:32.730 "dma_device_type": 1 00:14:32.730 }, 00:14:32.730 { 00:14:32.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.730 "dma_device_type": 2 00:14:32.730 } 00:14:32.730 ], 00:14:32.730 "driver_specific": {} 00:14:32.730 } 00:14:32.730 ] 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
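Note the contrast between this Malloc disk descriptor (`unmap`, `flush`, `abort`, `zcopy`, and `copy` all true) and the Raid Volume descriptor dumped earlier in the log (those same flags false). A sketch of pulling one `supported_io_types` flag out of such a descriptor with a bash regex, using values copied from the log; the test scripts themselves do this with jq:

```shell
# Extract a supported_io_types flag from a bdev descriptor with a bash regex.
# The JSON fragments below are trimmed copies of the log's dumps.
malloc_bdev='{"product_name": "Malloc disk", "supported_io_types": {"read": true, "zcopy": true, "abort": true}}'
raid_bdev='{"product_name": "Raid Volume", "supported_io_types": {"read": true, "zcopy": false, "abort": false}}'

io_type_supported() {             # io_type_supported <json> <flag>
  [[ $1 =~ \"$2\":\ (true|false) ]] && echo "${BASH_REMATCH[1]}"
}

io_type_supported "$malloc_bdev" zcopy   # true:  malloc bdevs support zero-copy
io_type_supported "$raid_bdev" zcopy     # false: the raid5f volume does not
```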
00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.730 "name": "Existed_Raid", 00:14:32.730 "uuid": "6cf8daed-aacb-445f-8478-aa553ebafef2", 00:14:32.730 "strip_size_kb": 64, 00:14:32.730 "state": "configuring", 00:14:32.730 "raid_level": "raid5f", 00:14:32.730 "superblock": true, 00:14:32.730 "num_base_bdevs": 3, 00:14:32.730 "num_base_bdevs_discovered": 1, 00:14:32.730 "num_base_bdevs_operational": 3, 00:14:32.730 "base_bdevs_list": [ 00:14:32.730 { 00:14:32.730 "name": "BaseBdev1", 00:14:32.730 "uuid": "a7c35cfb-e11f-41f2-8695-c3dcdc143c29", 00:14:32.730 "is_configured": true, 00:14:32.730 "data_offset": 2048, 00:14:32.730 "data_size": 63488 00:14:32.730 }, 00:14:32.730 { 00:14:32.730 "name": "BaseBdev2", 00:14:32.730 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:32.730 "is_configured": false, 00:14:32.730 "data_offset": 0, 00:14:32.730 "data_size": 0 00:14:32.730 }, 00:14:32.730 { 00:14:32.730 "name": "BaseBdev3", 00:14:32.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.730 "is_configured": false, 00:14:32.730 "data_offset": 0, 00:14:32.730 "data_size": 0 00:14:32.730 } 00:14:32.730 ] 00:14:32.730 }' 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.730 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.297 [2024-11-21 03:23:20.604558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.297 [2024-11-21 03:23:20.604640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.297 [2024-11-21 03:23:20.616577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.297 [2024-11-21 03:23:20.618743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:33.297 [2024-11-21 03:23:20.618830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.297 [2024-11-21 03:23:20.618864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.297 [2024-11-21 03:23:20.618891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.297 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.298 03:23:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.298 "name": "Existed_Raid", 00:14:33.298 "uuid": "de85f0e1-facd-4787-85e2-680a1fa2dba3", 00:14:33.298 "strip_size_kb": 64, 00:14:33.298 "state": "configuring", 00:14:33.298 "raid_level": "raid5f", 00:14:33.298 "superblock": true, 00:14:33.298 "num_base_bdevs": 3, 00:14:33.298 "num_base_bdevs_discovered": 1, 00:14:33.298 "num_base_bdevs_operational": 3, 00:14:33.298 "base_bdevs_list": [ 00:14:33.298 { 00:14:33.298 "name": "BaseBdev1", 00:14:33.298 "uuid": "a7c35cfb-e11f-41f2-8695-c3dcdc143c29", 00:14:33.298 "is_configured": true, 00:14:33.298 "data_offset": 2048, 00:14:33.298 "data_size": 63488 00:14:33.298 }, 00:14:33.298 { 00:14:33.298 "name": "BaseBdev2", 00:14:33.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.298 "is_configured": false, 00:14:33.298 "data_offset": 0, 00:14:33.298 "data_size": 0 00:14:33.298 }, 00:14:33.298 { 00:14:33.298 "name": "BaseBdev3", 00:14:33.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.298 "is_configured": false, 00:14:33.298 "data_offset": 0, 00:14:33.298 "data_size": 0 00:14:33.298 } 00:14:33.298 ] 00:14:33.298 }' 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.298 03:23:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
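The `verify_raid_bdev_state` helper seen above (bdev_raid.sh@113) fetches `bdev_raid_get_bdevs all`, selects the `Existed_Raid` entry with jq, and compares fields like `state` and `num_base_bdevs_discovered` against the expected values. A hedged pure-bash stand-in for that comparison, using field values copied from the dump above (the real check uses jq, not regexes):

```shell
# Stand-in for verify_raid_bdev_state: pull "state" and
# "num_base_bdevs_discovered" out of a raid bdev dump and check them.
# Values are copied from the log: only BaseBdev1 exists at this point.
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring", "num_base_bdevs": 3, "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 3}'

[[ $raid_bdev_info =~ \"state\":\ \"([a-z]+)\" ]] && state="${BASH_REMATCH[1]}"
[[ $raid_bdev_info =~ \"num_base_bdevs_discovered\":\ ([0-9]+) ]] && discovered="${BASH_REMATCH[1]}"

echo "$state $discovered"         # "configuring 1": one of three base bdevs is present
```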
00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.556 [2024-11-21 03:23:21.045344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.556 BaseBdev2 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.556 [ 00:14:33.556 { 00:14:33.556 "name": "BaseBdev2", 00:14:33.556 "aliases": [ 00:14:33.556 "958011ea-fc7f-46d6-a724-edc4791ffc05" 00:14:33.556 ], 00:14:33.556 "product_name": "Malloc disk", 00:14:33.556 "block_size": 512, 00:14:33.556 "num_blocks": 65536, 00:14:33.556 "uuid": "958011ea-fc7f-46d6-a724-edc4791ffc05", 00:14:33.556 "assigned_rate_limits": { 00:14:33.556 "rw_ios_per_sec": 0, 00:14:33.556 "rw_mbytes_per_sec": 0, 00:14:33.556 "r_mbytes_per_sec": 0, 00:14:33.556 "w_mbytes_per_sec": 0 00:14:33.556 }, 00:14:33.556 "claimed": true, 00:14:33.556 "claim_type": "exclusive_write", 00:14:33.556 "zoned": false, 00:14:33.556 "supported_io_types": { 00:14:33.556 "read": true, 00:14:33.556 "write": true, 00:14:33.556 "unmap": true, 00:14:33.556 "flush": true, 00:14:33.556 "reset": true, 00:14:33.556 "nvme_admin": false, 00:14:33.556 "nvme_io": false, 00:14:33.556 "nvme_io_md": false, 00:14:33.556 "write_zeroes": true, 00:14:33.556 "zcopy": true, 00:14:33.556 "get_zone_info": false, 00:14:33.556 "zone_management": false, 00:14:33.556 "zone_append": false, 00:14:33.556 "compare": false, 00:14:33.556 "compare_and_write": false, 00:14:33.556 "abort": true, 00:14:33.556 "seek_hole": false, 00:14:33.556 "seek_data": false, 00:14:33.556 "copy": true, 00:14:33.556 "nvme_iov_md": false 00:14:33.556 }, 00:14:33.556 "memory_domains": [ 00:14:33.556 { 00:14:33.556 "dma_device_id": "system", 00:14:33.556 "dma_device_type": 1 00:14:33.556 }, 00:14:33.556 { 00:14:33.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.556 "dma_device_type": 2 00:14:33.556 } 00:14:33.556 ], 00:14:33.556 "driver_specific": {} 00:14:33.556 } 00:14:33.556 ] 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.556 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.556 03:23:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.814 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.814 "name": "Existed_Raid", 00:14:33.814 "uuid": "de85f0e1-facd-4787-85e2-680a1fa2dba3", 00:14:33.814 "strip_size_kb": 64, 00:14:33.814 "state": "configuring", 00:14:33.814 "raid_level": "raid5f", 00:14:33.814 "superblock": true, 00:14:33.814 "num_base_bdevs": 3, 00:14:33.814 "num_base_bdevs_discovered": 2, 00:14:33.814 "num_base_bdevs_operational": 3, 00:14:33.814 "base_bdevs_list": [ 00:14:33.814 { 00:14:33.814 "name": "BaseBdev1", 00:14:33.814 "uuid": "a7c35cfb-e11f-41f2-8695-c3dcdc143c29", 00:14:33.814 "is_configured": true, 00:14:33.814 "data_offset": 2048, 00:14:33.814 "data_size": 63488 00:14:33.814 }, 00:14:33.814 { 00:14:33.814 "name": "BaseBdev2", 00:14:33.814 "uuid": "958011ea-fc7f-46d6-a724-edc4791ffc05", 00:14:33.814 "is_configured": true, 00:14:33.814 "data_offset": 2048, 00:14:33.814 "data_size": 63488 00:14:33.814 }, 00:14:33.814 { 00:14:33.814 "name": "BaseBdev3", 00:14:33.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.814 "is_configured": false, 00:14:33.814 "data_offset": 0, 00:14:33.814 "data_size": 0 00:14:33.814 } 00:14:33.814 ] 00:14:33.814 }' 00:14:33.814 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.814 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.072 [2024-11-21 03:23:21.565128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:34.072 [2024-11-21 03:23:21.565372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:34.072 [2024-11-21 03:23:21.565395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.072 [2024-11-21 03:23:21.565721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:34.072 BaseBdev3 00:14:34.072 [2024-11-21 03:23:21.566238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:34.072 [2024-11-21 03:23:21.566260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:34.072 [2024-11-21 03:23:21.566406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.072 03:23:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.072 [ 00:14:34.072 { 00:14:34.072 "name": "BaseBdev3", 00:14:34.072 "aliases": [ 00:14:34.072 "b487368e-aacb-4db8-9bc9-c332dac51bb9" 00:14:34.072 ], 00:14:34.072 "product_name": "Malloc disk", 00:14:34.072 "block_size": 512, 00:14:34.072 "num_blocks": 65536, 00:14:34.072 "uuid": "b487368e-aacb-4db8-9bc9-c332dac51bb9", 00:14:34.072 "assigned_rate_limits": { 00:14:34.072 "rw_ios_per_sec": 0, 00:14:34.072 "rw_mbytes_per_sec": 0, 00:14:34.072 "r_mbytes_per_sec": 0, 00:14:34.072 "w_mbytes_per_sec": 0 00:14:34.072 }, 00:14:34.072 "claimed": true, 00:14:34.072 "claim_type": "exclusive_write", 00:14:34.072 "zoned": false, 00:14:34.072 "supported_io_types": { 00:14:34.072 "read": true, 00:14:34.072 "write": true, 00:14:34.072 "unmap": true, 00:14:34.072 "flush": true, 00:14:34.072 "reset": true, 00:14:34.072 "nvme_admin": false, 00:14:34.072 "nvme_io": false, 00:14:34.072 "nvme_io_md": false, 00:14:34.072 "write_zeroes": true, 00:14:34.072 "zcopy": true, 00:14:34.072 "get_zone_info": false, 00:14:34.072 "zone_management": false, 00:14:34.072 "zone_append": false, 00:14:34.072 "compare": false, 00:14:34.072 "compare_and_write": false, 00:14:34.072 "abort": true, 00:14:34.072 "seek_hole": false, 00:14:34.072 "seek_data": false, 00:14:34.072 "copy": true, 00:14:34.072 "nvme_iov_md": false 00:14:34.072 }, 00:14:34.072 "memory_domains": [ 00:14:34.072 { 00:14:34.072 "dma_device_id": "system", 00:14:34.072 "dma_device_type": 1 00:14:34.072 }, 00:14:34.072 { 00:14:34.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.072 
"dma_device_type": 2 00:14:34.072 } 00:14:34.072 ], 00:14:34.072 "driver_specific": {} 00:14:34.072 } 00:14:34.072 ] 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.072 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.073 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.330 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.330 "name": "Existed_Raid", 00:14:34.330 "uuid": "de85f0e1-facd-4787-85e2-680a1fa2dba3", 00:14:34.330 "strip_size_kb": 64, 00:14:34.330 "state": "online", 00:14:34.330 "raid_level": "raid5f", 00:14:34.330 "superblock": true, 00:14:34.330 "num_base_bdevs": 3, 00:14:34.330 "num_base_bdevs_discovered": 3, 00:14:34.330 "num_base_bdevs_operational": 3, 00:14:34.330 "base_bdevs_list": [ 00:14:34.330 { 00:14:34.330 "name": "BaseBdev1", 00:14:34.330 "uuid": "a7c35cfb-e11f-41f2-8695-c3dcdc143c29", 00:14:34.330 "is_configured": true, 00:14:34.330 "data_offset": 2048, 00:14:34.330 "data_size": 63488 00:14:34.331 }, 00:14:34.331 { 00:14:34.331 "name": "BaseBdev2", 00:14:34.331 "uuid": "958011ea-fc7f-46d6-a724-edc4791ffc05", 00:14:34.331 "is_configured": true, 00:14:34.331 "data_offset": 2048, 00:14:34.331 "data_size": 63488 00:14:34.331 }, 00:14:34.331 { 00:14:34.331 "name": "BaseBdev3", 00:14:34.331 "uuid": "b487368e-aacb-4db8-9bc9-c332dac51bb9", 00:14:34.331 "is_configured": true, 00:14:34.331 "data_offset": 2048, 00:14:34.331 "data_size": 63488 00:14:34.331 } 00:14:34.331 ] 00:14:34.331 }' 00:14:34.331 03:23:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.331 03:23:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.588 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.589 [2024-11-21 03:23:22.033441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.589 "name": "Existed_Raid", 00:14:34.589 "aliases": [ 00:14:34.589 "de85f0e1-facd-4787-85e2-680a1fa2dba3" 00:14:34.589 ], 00:14:34.589 "product_name": "Raid Volume", 00:14:34.589 "block_size": 512, 00:14:34.589 "num_blocks": 126976, 00:14:34.589 "uuid": "de85f0e1-facd-4787-85e2-680a1fa2dba3", 00:14:34.589 "assigned_rate_limits": { 00:14:34.589 "rw_ios_per_sec": 0, 00:14:34.589 "rw_mbytes_per_sec": 0, 00:14:34.589 "r_mbytes_per_sec": 0, 00:14:34.589 "w_mbytes_per_sec": 0 00:14:34.589 }, 00:14:34.589 "claimed": false, 00:14:34.589 "zoned": false, 00:14:34.589 "supported_io_types": { 00:14:34.589 "read": true, 00:14:34.589 "write": true, 00:14:34.589 "unmap": false, 
00:14:34.589 "flush": false, 00:14:34.589 "reset": true, 00:14:34.589 "nvme_admin": false, 00:14:34.589 "nvme_io": false, 00:14:34.589 "nvme_io_md": false, 00:14:34.589 "write_zeroes": true, 00:14:34.589 "zcopy": false, 00:14:34.589 "get_zone_info": false, 00:14:34.589 "zone_management": false, 00:14:34.589 "zone_append": false, 00:14:34.589 "compare": false, 00:14:34.589 "compare_and_write": false, 00:14:34.589 "abort": false, 00:14:34.589 "seek_hole": false, 00:14:34.589 "seek_data": false, 00:14:34.589 "copy": false, 00:14:34.589 "nvme_iov_md": false 00:14:34.589 }, 00:14:34.589 "driver_specific": { 00:14:34.589 "raid": { 00:14:34.589 "uuid": "de85f0e1-facd-4787-85e2-680a1fa2dba3", 00:14:34.589 "strip_size_kb": 64, 00:14:34.589 "state": "online", 00:14:34.589 "raid_level": "raid5f", 00:14:34.589 "superblock": true, 00:14:34.589 "num_base_bdevs": 3, 00:14:34.589 "num_base_bdevs_discovered": 3, 00:14:34.589 "num_base_bdevs_operational": 3, 00:14:34.589 "base_bdevs_list": [ 00:14:34.589 { 00:14:34.589 "name": "BaseBdev1", 00:14:34.589 "uuid": "a7c35cfb-e11f-41f2-8695-c3dcdc143c29", 00:14:34.589 "is_configured": true, 00:14:34.589 "data_offset": 2048, 00:14:34.589 "data_size": 63488 00:14:34.589 }, 00:14:34.589 { 00:14:34.589 "name": "BaseBdev2", 00:14:34.589 "uuid": "958011ea-fc7f-46d6-a724-edc4791ffc05", 00:14:34.589 "is_configured": true, 00:14:34.589 "data_offset": 2048, 00:14:34.589 "data_size": 63488 00:14:34.589 }, 00:14:34.589 { 00:14:34.589 "name": "BaseBdev3", 00:14:34.589 "uuid": "b487368e-aacb-4db8-9bc9-c332dac51bb9", 00:14:34.589 "is_configured": true, 00:14:34.589 "data_offset": 2048, 00:14:34.589 "data_size": 63488 00:14:34.589 } 00:14:34.589 ] 00:14:34.589 } 00:14:34.589 } 00:14:34.589 }' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:14:34.589 BaseBdev2 00:14:34.589 BaseBdev3' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.589 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.848 03:23:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.848 [2024-11-21 03:23:22.281383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:34.848 
03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.848 "name": "Existed_Raid", 00:14:34.848 "uuid": "de85f0e1-facd-4787-85e2-680a1fa2dba3", 00:14:34.848 "strip_size_kb": 64, 00:14:34.848 "state": "online", 00:14:34.848 "raid_level": "raid5f", 00:14:34.848 "superblock": true, 00:14:34.848 "num_base_bdevs": 3, 00:14:34.848 "num_base_bdevs_discovered": 2, 00:14:34.848 "num_base_bdevs_operational": 2, 00:14:34.848 "base_bdevs_list": [ 00:14:34.848 { 00:14:34.848 "name": null, 00:14:34.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.848 "is_configured": false, 00:14:34.848 "data_offset": 0, 00:14:34.848 "data_size": 63488 00:14:34.848 }, 00:14:34.848 { 00:14:34.848 "name": "BaseBdev2", 00:14:34.848 "uuid": "958011ea-fc7f-46d6-a724-edc4791ffc05", 00:14:34.848 "is_configured": true, 00:14:34.848 "data_offset": 2048, 00:14:34.848 "data_size": 63488 00:14:34.848 }, 00:14:34.848 { 00:14:34.848 "name": "BaseBdev3", 00:14:34.848 "uuid": "b487368e-aacb-4db8-9bc9-c332dac51bb9", 00:14:34.848 "is_configured": true, 00:14:34.848 "data_offset": 2048, 00:14:34.848 "data_size": 63488 00:14:34.848 } 00:14:34.848 ] 00:14:34.848 }' 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.848 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.416 
03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 [2024-11-21 03:23:22.745813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.416 [2024-11-21 03:23:22.746029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.416 [2024-11-21 03:23:22.766505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:35.416 03:23:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 [2024-11-21 03:23:22.826600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:35.416 [2024-11-21 03:23:22.826727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 03:23:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 BaseBdev2 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.416 03:23:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.416 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 [ 00:14:35.416 { 00:14:35.416 "name": "BaseBdev2", 00:14:35.416 "aliases": [ 00:14:35.416 "cf95b869-b52d-4987-bf96-ca9b57d2d1d5" 00:14:35.416 ], 00:14:35.416 "product_name": "Malloc disk", 00:14:35.416 "block_size": 512, 00:14:35.416 "num_blocks": 65536, 00:14:35.416 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5", 00:14:35.416 "assigned_rate_limits": { 00:14:35.416 "rw_ios_per_sec": 0, 00:14:35.416 "rw_mbytes_per_sec": 0, 00:14:35.416 "r_mbytes_per_sec": 0, 00:14:35.416 "w_mbytes_per_sec": 0 00:14:35.416 }, 00:14:35.416 "claimed": false, 00:14:35.416 "zoned": false, 00:14:35.416 "supported_io_types": { 00:14:35.416 "read": true, 00:14:35.416 "write": true, 00:14:35.416 "unmap": true, 00:14:35.416 "flush": true, 00:14:35.416 "reset": true, 00:14:35.416 "nvme_admin": false, 00:14:35.417 "nvme_io": false, 00:14:35.417 "nvme_io_md": false, 00:14:35.417 "write_zeroes": true, 00:14:35.417 "zcopy": true, 00:14:35.417 "get_zone_info": false, 00:14:35.417 "zone_management": false, 00:14:35.417 "zone_append": false, 00:14:35.417 "compare": false, 00:14:35.417 "compare_and_write": false, 00:14:35.417 "abort": true, 00:14:35.417 "seek_hole": false, 00:14:35.417 "seek_data": false, 00:14:35.417 "copy": true, 00:14:35.417 "nvme_iov_md": false 00:14:35.417 }, 00:14:35.417 "memory_domains": [ 
00:14:35.417 { 00:14:35.417 "dma_device_id": "system", 00:14:35.417 "dma_device_type": 1 00:14:35.417 }, 00:14:35.417 { 00:14:35.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.417 "dma_device_type": 2 00:14:35.417 } 00:14:35.417 ], 00:14:35.417 "driver_specific": {} 00:14:35.417 } 00:14:35.417 ] 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.417 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.675 BaseBdev3 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.675 03:23:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.675 03:23:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.675 [ 00:14:35.675 { 00:14:35.675 "name": "BaseBdev3", 00:14:35.675 "aliases": [ 00:14:35.675 "5362eacf-5fe5-412b-8736-3cc6f6367caa" 00:14:35.675 ], 00:14:35.675 "product_name": "Malloc disk", 00:14:35.675 "block_size": 512, 00:14:35.675 "num_blocks": 65536, 00:14:35.675 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa", 00:14:35.675 "assigned_rate_limits": { 00:14:35.675 "rw_ios_per_sec": 0, 00:14:35.675 "rw_mbytes_per_sec": 0, 00:14:35.675 "r_mbytes_per_sec": 0, 00:14:35.675 "w_mbytes_per_sec": 0 00:14:35.675 }, 00:14:35.675 "claimed": false, 00:14:35.675 "zoned": false, 00:14:35.675 "supported_io_types": { 00:14:35.675 "read": true, 00:14:35.675 "write": true, 00:14:35.675 "unmap": true, 00:14:35.675 "flush": true, 00:14:35.675 "reset": true, 00:14:35.675 "nvme_admin": false, 00:14:35.675 "nvme_io": false, 00:14:35.675 "nvme_io_md": false, 00:14:35.675 "write_zeroes": true, 00:14:35.675 "zcopy": true, 00:14:35.675 "get_zone_info": false, 00:14:35.675 "zone_management": false, 00:14:35.675 "zone_append": false, 00:14:35.675 "compare": false, 00:14:35.675 "compare_and_write": false, 00:14:35.675 "abort": true, 00:14:35.675 "seek_hole": false, 00:14:35.675 
"seek_data": false, 00:14:35.675 "copy": true, 00:14:35.675 "nvme_iov_md": false 00:14:35.675 }, 00:14:35.675 "memory_domains": [ 00:14:35.675 { 00:14:35.675 "dma_device_id": "system", 00:14:35.675 "dma_device_type": 1 00:14:35.675 }, 00:14:35.675 { 00:14:35.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.675 "dma_device_type": 2 00:14:35.675 } 00:14:35.675 ], 00:14:35.675 "driver_specific": {} 00:14:35.675 } 00:14:35.675 ] 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.675 [2024-11-21 03:23:23.025949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.675 [2024-11-21 03:23:23.026106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.675 [2024-11-21 03:23:23.026169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.675 [2024-11-21 03:23:23.028274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.675 "name": "Existed_Raid", 00:14:35.675 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8", 00:14:35.675 "strip_size_kb": 64, 00:14:35.675 
"state": "configuring", 00:14:35.675 "raid_level": "raid5f", 00:14:35.675 "superblock": true, 00:14:35.675 "num_base_bdevs": 3, 00:14:35.675 "num_base_bdevs_discovered": 2, 00:14:35.675 "num_base_bdevs_operational": 3, 00:14:35.675 "base_bdevs_list": [ 00:14:35.675 { 00:14:35.675 "name": "BaseBdev1", 00:14:35.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.675 "is_configured": false, 00:14:35.675 "data_offset": 0, 00:14:35.675 "data_size": 0 00:14:35.675 }, 00:14:35.675 { 00:14:35.675 "name": "BaseBdev2", 00:14:35.675 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5", 00:14:35.675 "is_configured": true, 00:14:35.675 "data_offset": 2048, 00:14:35.675 "data_size": 63488 00:14:35.675 }, 00:14:35.675 { 00:14:35.675 "name": "BaseBdev3", 00:14:35.675 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa", 00:14:35.675 "is_configured": true, 00:14:35.675 "data_offset": 2048, 00:14:35.675 "data_size": 63488 00:14:35.675 } 00:14:35.675 ] 00:14:35.675 }' 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.675 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.933 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:35.933 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.933 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.191 [2024-11-21 03:23:23.502046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.192 "name": "Existed_Raid", 00:14:36.192 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8", 00:14:36.192 "strip_size_kb": 64, 00:14:36.192 "state": "configuring", 00:14:36.192 "raid_level": "raid5f", 00:14:36.192 "superblock": true, 00:14:36.192 "num_base_bdevs": 3, 00:14:36.192 "num_base_bdevs_discovered": 1, 
00:14:36.192 "num_base_bdevs_operational": 3, 00:14:36.192 "base_bdevs_list": [ 00:14:36.192 { 00:14:36.192 "name": "BaseBdev1", 00:14:36.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.192 "is_configured": false, 00:14:36.192 "data_offset": 0, 00:14:36.192 "data_size": 0 00:14:36.192 }, 00:14:36.192 { 00:14:36.192 "name": null, 00:14:36.192 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5", 00:14:36.192 "is_configured": false, 00:14:36.192 "data_offset": 0, 00:14:36.192 "data_size": 63488 00:14:36.192 }, 00:14:36.192 { 00:14:36.192 "name": "BaseBdev3", 00:14:36.192 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa", 00:14:36.192 "is_configured": true, 00:14:36.192 "data_offset": 2048, 00:14:36.192 "data_size": 63488 00:14:36.192 } 00:14:36.192 ] 00:14:36.192 }' 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.192 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.451 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.451 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.451 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.451 03:23:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:36.451 03:23:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.710 03:23:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.710 [2024-11-21 03:23:24.038905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.710 BaseBdev1 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.710 [ 00:14:36.710 { 00:14:36.710 "name": "BaseBdev1", 00:14:36.710 "aliases": [ 00:14:36.710 
"82150312-d651-46a3-8828-fa0e37d7cc8d" 00:14:36.710 ], 00:14:36.710 "product_name": "Malloc disk", 00:14:36.710 "block_size": 512, 00:14:36.710 "num_blocks": 65536, 00:14:36.710 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d", 00:14:36.710 "assigned_rate_limits": { 00:14:36.710 "rw_ios_per_sec": 0, 00:14:36.710 "rw_mbytes_per_sec": 0, 00:14:36.710 "r_mbytes_per_sec": 0, 00:14:36.710 "w_mbytes_per_sec": 0 00:14:36.710 }, 00:14:36.710 "claimed": true, 00:14:36.710 "claim_type": "exclusive_write", 00:14:36.710 "zoned": false, 00:14:36.710 "supported_io_types": { 00:14:36.710 "read": true, 00:14:36.710 "write": true, 00:14:36.710 "unmap": true, 00:14:36.710 "flush": true, 00:14:36.710 "reset": true, 00:14:36.710 "nvme_admin": false, 00:14:36.710 "nvme_io": false, 00:14:36.710 "nvme_io_md": false, 00:14:36.710 "write_zeroes": true, 00:14:36.710 "zcopy": true, 00:14:36.710 "get_zone_info": false, 00:14:36.710 "zone_management": false, 00:14:36.710 "zone_append": false, 00:14:36.710 "compare": false, 00:14:36.710 "compare_and_write": false, 00:14:36.710 "abort": true, 00:14:36.710 "seek_hole": false, 00:14:36.710 "seek_data": false, 00:14:36.710 "copy": true, 00:14:36.710 "nvme_iov_md": false 00:14:36.710 }, 00:14:36.710 "memory_domains": [ 00:14:36.710 { 00:14:36.710 "dma_device_id": "system", 00:14:36.710 "dma_device_type": 1 00:14:36.710 }, 00:14:36.710 { 00:14:36.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.710 "dma_device_type": 2 00:14:36.710 } 00:14:36.710 ], 00:14:36.710 "driver_specific": {} 00:14:36.710 } 00:14:36.710 ] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.710 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.710 "name": "Existed_Raid", 00:14:36.710 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8", 00:14:36.710 "strip_size_kb": 64, 00:14:36.710 "state": "configuring", 00:14:36.710 "raid_level": "raid5f", 00:14:36.710 "superblock": true, 00:14:36.710 "num_base_bdevs": 3, 00:14:36.710 
"num_base_bdevs_discovered": 2, 00:14:36.710 "num_base_bdevs_operational": 3, 00:14:36.710 "base_bdevs_list": [ 00:14:36.710 { 00:14:36.710 "name": "BaseBdev1", 00:14:36.710 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d", 00:14:36.710 "is_configured": true, 00:14:36.710 "data_offset": 2048, 00:14:36.710 "data_size": 63488 00:14:36.710 }, 00:14:36.710 { 00:14:36.710 "name": null, 00:14:36.710 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5", 00:14:36.710 "is_configured": false, 00:14:36.710 "data_offset": 0, 00:14:36.710 "data_size": 63488 00:14:36.710 }, 00:14:36.711 { 00:14:36.711 "name": "BaseBdev3", 00:14:36.711 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa", 00:14:36.711 "is_configured": true, 00:14:36.711 "data_offset": 2048, 00:14:36.711 "data_size": 63488 00:14:36.711 } 00:14:36.711 ] 00:14:36.711 }' 00:14:36.711 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.711 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.969 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.969 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.969 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.969 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.969 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.228 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:37.228 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:37.228 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.229 [2024-11-21 03:23:24.547086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.229 "name": "Existed_Raid", 00:14:37.229 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8", 00:14:37.229 "strip_size_kb": 64, 00:14:37.229 "state": "configuring", 00:14:37.229 "raid_level": "raid5f", 00:14:37.229 "superblock": true, 00:14:37.229 "num_base_bdevs": 3, 00:14:37.229 "num_base_bdevs_discovered": 1, 00:14:37.229 "num_base_bdevs_operational": 3, 00:14:37.229 "base_bdevs_list": [ 00:14:37.229 { 00:14:37.229 "name": "BaseBdev1", 00:14:37.229 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d", 00:14:37.229 "is_configured": true, 00:14:37.229 "data_offset": 2048, 00:14:37.229 "data_size": 63488 00:14:37.229 }, 00:14:37.229 { 00:14:37.229 "name": null, 00:14:37.229 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5", 00:14:37.229 "is_configured": false, 00:14:37.229 "data_offset": 0, 00:14:37.229 "data_size": 63488 00:14:37.229 }, 00:14:37.229 { 00:14:37.229 "name": null, 00:14:37.229 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa", 00:14:37.229 "is_configured": false, 00:14:37.229 "data_offset": 0, 00:14:37.229 "data_size": 63488 00:14:37.229 } 00:14:37.229 ] 00:14:37.229 }' 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.229 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.489 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.489 03:23:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.489 03:23:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.489 03:23:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.489 [2024-11-21 03:23:25.027242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.489 
03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.489 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.748 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.748 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.748 "name": "Existed_Raid", 00:14:37.748 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8", 00:14:37.748 "strip_size_kb": 64, 00:14:37.748 "state": "configuring", 00:14:37.748 "raid_level": "raid5f", 00:14:37.748 "superblock": true, 00:14:37.748 "num_base_bdevs": 3, 00:14:37.748 "num_base_bdevs_discovered": 2, 00:14:37.748 "num_base_bdevs_operational": 3, 00:14:37.748 "base_bdevs_list": [ 00:14:37.748 { 00:14:37.748 "name": "BaseBdev1", 00:14:37.748 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d", 00:14:37.748 "is_configured": true, 00:14:37.748 "data_offset": 2048, 00:14:37.748 "data_size": 63488 00:14:37.748 }, 00:14:37.748 { 00:14:37.748 "name": null, 00:14:37.748 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5", 00:14:37.748 "is_configured": false, 00:14:37.748 "data_offset": 0, 00:14:37.748 "data_size": 63488 00:14:37.748 }, 00:14:37.748 { 00:14:37.748 "name": "BaseBdev3", 00:14:37.748 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa", 00:14:37.748 "is_configured": true, 00:14:37.748 "data_offset": 2048, 00:14:37.748 "data_size": 63488 00:14:37.748 } 00:14:37.748 ] 00:14:37.748 }' 00:14:37.748 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable
00:14:37.748 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.008 [2024-11-21 03:23:25.447355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:38.008 "name": "Existed_Raid",
00:14:38.008 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8",
00:14:38.008 "strip_size_kb": 64,
00:14:38.008 "state": "configuring",
00:14:38.008 "raid_level": "raid5f",
00:14:38.008 "superblock": true,
00:14:38.008 "num_base_bdevs": 3,
00:14:38.008 "num_base_bdevs_discovered": 1,
00:14:38.008 "num_base_bdevs_operational": 3,
00:14:38.008 "base_bdevs_list": [
00:14:38.008 {
00:14:38.008 "name": null,
00:14:38.008 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d",
00:14:38.008 "is_configured": false,
00:14:38.008 "data_offset": 0,
00:14:38.008 "data_size": 63488
00:14:38.008 },
00:14:38.008 {
00:14:38.008 "name": null,
00:14:38.008 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5",
00:14:38.008 "is_configured": false,
00:14:38.008 "data_offset": 0,
00:14:38.008 "data_size": 63488
00:14:38.008 },
00:14:38.008 {
00:14:38.008 "name": "BaseBdev3",
00:14:38.008 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa",
00:14:38.008 "is_configured": true,
00:14:38.008 "data_offset": 2048,
00:14:38.008 "data_size": 63488
00:14:38.008 }
00:14:38.008 ]
00:14:38.008 }'
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:38.008 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.592 [2024-11-21 03:23:25.930945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.592 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:38.593 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.593 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.593 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.593 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:38.593 "name": "Existed_Raid",
00:14:38.593 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8",
00:14:38.593 "strip_size_kb": 64,
00:14:38.593 "state": "configuring",
00:14:38.593 "raid_level": "raid5f",
00:14:38.593 "superblock": true,
00:14:38.593 "num_base_bdevs": 3,
00:14:38.593 "num_base_bdevs_discovered": 2,
00:14:38.593 "num_base_bdevs_operational": 3,
00:14:38.593 "base_bdevs_list": [
00:14:38.593 {
00:14:38.593 "name": null,
00:14:38.593 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d",
00:14:38.593 "is_configured": false,
00:14:38.593 "data_offset": 0,
00:14:38.593 "data_size": 63488
00:14:38.593 },
00:14:38.593 {
00:14:38.593 "name": "BaseBdev2",
00:14:38.593 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5",
00:14:38.593 "is_configured": true,
00:14:38.593 "data_offset": 2048,
00:14:38.593 "data_size": 63488
00:14:38.593 },
00:14:38.593 {
00:14:38.593 "name": "BaseBdev3",
00:14:38.593 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa",
00:14:38.593 "is_configured": true,
00:14:38.593 "data_offset": 2048,
00:14:38.593 "data_size": 63488
00:14:38.593 }
00:14:38.593 ]
00:14:38.593 }'
00:14:38.593 03:23:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:38.593 03:23:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.868 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82150312-d651-46a3-8828-fa0e37d7cc8d
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.128 [2024-11-21 03:23:26.482565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:14:39.128 [2024-11-21 03:23:26.482751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:39.128 [2024-11-21 03:23:26.482765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:39.128 [2024-11-21 03:23:26.483078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630
00:14:39.128 NewBaseBdev
00:14:39.128 [2024-11-21 03:23:26.483519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:39.128 [2024-11-21 03:23:26.483537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:14:39.128 [2024-11-21 03:23:26.483646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.128 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.129 [
00:14:39.129 {
00:14:39.129 "name": "NewBaseBdev",
00:14:39.129 "aliases": [
00:14:39.129 "82150312-d651-46a3-8828-fa0e37d7cc8d"
00:14:39.129 ],
00:14:39.129 "product_name": "Malloc disk",
00:14:39.129 "block_size": 512,
00:14:39.129 "num_blocks": 65536,
00:14:39.129 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d",
00:14:39.129 "assigned_rate_limits": {
00:14:39.129 "rw_ios_per_sec": 0,
00:14:39.129 "rw_mbytes_per_sec": 0,
00:14:39.129 "r_mbytes_per_sec": 0,
00:14:39.129 "w_mbytes_per_sec": 0
00:14:39.129 },
00:14:39.129 "claimed": true,
00:14:39.129 "claim_type": "exclusive_write",
00:14:39.129 "zoned": false,
00:14:39.129 "supported_io_types": {
00:14:39.129 "read": true,
00:14:39.129 "write": true,
00:14:39.129 "unmap": true,
00:14:39.129 "flush": true,
00:14:39.129 "reset": true,
00:14:39.129 "nvme_admin": false,
00:14:39.129 "nvme_io": false,
00:14:39.129 "nvme_io_md": false,
00:14:39.129 "write_zeroes": true,
00:14:39.129 "zcopy": true,
00:14:39.129 "get_zone_info": false,
00:14:39.129 "zone_management": false,
00:14:39.129 "zone_append": false,
00:14:39.129 "compare": false,
00:14:39.129 "compare_and_write": false,
00:14:39.129 "abort": true,
00:14:39.129 "seek_hole": false,
00:14:39.129 "seek_data": false,
00:14:39.129 "copy": true,
00:14:39.129 "nvme_iov_md": false
00:14:39.129 },
00:14:39.129 "memory_domains": [
00:14:39.129 {
00:14:39.129 "dma_device_id": "system",
00:14:39.129 "dma_device_type": 1
00:14:39.129 },
00:14:39.129 {
00:14:39.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:39.129 "dma_device_type": 2
00:14:39.129 }
00:14:39.129 ],
00:14:39.129 "driver_specific": {}
00:14:39.129 }
00:14:39.129 ]
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:39.129 "name": "Existed_Raid",
00:14:39.129 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8",
00:14:39.129 "strip_size_kb": 64,
00:14:39.129 "state": "online",
00:14:39.129 "raid_level": "raid5f",
00:14:39.129 "superblock": true,
00:14:39.129 "num_base_bdevs": 3,
00:14:39.129 "num_base_bdevs_discovered": 3,
00:14:39.129 "num_base_bdevs_operational": 3,
00:14:39.129 "base_bdevs_list": [
00:14:39.129 {
00:14:39.129 "name": "NewBaseBdev",
00:14:39.129 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d",
00:14:39.129 "is_configured": true,
00:14:39.129 "data_offset": 2048,
00:14:39.129 "data_size": 63488
00:14:39.129 },
00:14:39.129 {
00:14:39.129 "name": "BaseBdev2",
00:14:39.129 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5",
00:14:39.129 "is_configured": true,
00:14:39.129 "data_offset": 2048,
00:14:39.129 "data_size": 63488
00:14:39.129 },
00:14:39.129 {
00:14:39.129 "name": "BaseBdev3",
00:14:39.129 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa",
00:14:39.129 "is_configured": true,
00:14:39.129 "data_offset": 2048,
00:14:39.129 "data_size": 63488
00:14:39.129 }
00:14:39.129 ]
00:14:39.129 }'
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:39.129 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:39.699 03:23:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.699 [2024-11-21 03:23:26.982934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:39.699 "name": "Existed_Raid",
00:14:39.699 "aliases": [
00:14:39.699 "7e6bcd79-524a-4a07-9863-7ff84ae0faf8"
00:14:39.699 ],
00:14:39.699 "product_name": "Raid Volume",
00:14:39.699 "block_size": 512,
00:14:39.699 "num_blocks": 126976,
00:14:39.699 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8",
00:14:39.699 "assigned_rate_limits": {
00:14:39.699 "rw_ios_per_sec": 0,
00:14:39.699 "rw_mbytes_per_sec": 0,
00:14:39.699 "r_mbytes_per_sec": 0,
00:14:39.699 "w_mbytes_per_sec": 0
00:14:39.699 },
00:14:39.699 "claimed": false,
00:14:39.699 "zoned": false,
00:14:39.699 "supported_io_types": {
00:14:39.699 "read": true,
00:14:39.699 "write": true,
00:14:39.699 "unmap": false,
00:14:39.699 "flush": false,
00:14:39.699 "reset": true,
00:14:39.699 "nvme_admin": false,
00:14:39.699 "nvme_io": false,
00:14:39.699 "nvme_io_md": false,
00:14:39.699 "write_zeroes": true,
00:14:39.699 "zcopy": false,
00:14:39.699 "get_zone_info": false,
00:14:39.699 "zone_management": false,
00:14:39.699 "zone_append": false,
00:14:39.699 "compare": false,
00:14:39.699 "compare_and_write": false,
00:14:39.699 "abort": false,
00:14:39.699 "seek_hole": false,
00:14:39.699 "seek_data": false,
00:14:39.699 "copy": false,
00:14:39.699 "nvme_iov_md": false
00:14:39.699 },
00:14:39.699 "driver_specific": {
00:14:39.699 "raid": {
00:14:39.699 "uuid": "7e6bcd79-524a-4a07-9863-7ff84ae0faf8",
00:14:39.699 "strip_size_kb": 64,
00:14:39.699 "state": "online",
00:14:39.699 "raid_level": "raid5f",
00:14:39.699 "superblock": true,
00:14:39.699 "num_base_bdevs": 3,
00:14:39.699 "num_base_bdevs_discovered": 3,
00:14:39.699 "num_base_bdevs_operational": 3,
00:14:39.699 "base_bdevs_list": [
00:14:39.699 {
00:14:39.699 "name": "NewBaseBdev",
00:14:39.699 "uuid": "82150312-d651-46a3-8828-fa0e37d7cc8d",
00:14:39.699 "is_configured": true,
00:14:39.699 "data_offset": 2048,
00:14:39.699 "data_size": 63488
00:14:39.699 },
00:14:39.699 {
00:14:39.699 "name": "BaseBdev2",
00:14:39.699 "uuid": "cf95b869-b52d-4987-bf96-ca9b57d2d1d5",
00:14:39.699 "is_configured": true,
00:14:39.699 "data_offset": 2048,
00:14:39.699 "data_size": 63488
00:14:39.699 },
00:14:39.699 {
00:14:39.699 "name": "BaseBdev3",
00:14:39.699 "uuid": "5362eacf-5fe5-412b-8736-3cc6f6367caa",
00:14:39.699 "is_configured": true,
00:14:39.699 "data_offset": 2048,
00:14:39.699 "data_size": 63488
00:14:39.699 }
00:14:39.699 ]
00:14:39.699 }
00:14:39.699 }
00:14:39.699 }'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:14:39.699 BaseBdev2
00:14:39.699 BaseBdev3'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.699 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:39.959 [2024-11-21 03:23:27.278818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:39.959 [2024-11-21 03:23:27.278899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:39.959 [2024-11-21 03:23:27.279008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:39.959 [2024-11-21 03:23:27.279342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:39.959 [2024-11-21 03:23:27.279399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93100
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 93100 ']'
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 93100
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93100
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:39.959 killing process with pid 93100
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93100'
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 93100
00:14:39.959 [2024-11-21 03:23:27.332000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:39.959 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 93100
00:14:39.959 [2024-11-21 03:23:27.389515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:40.219 03:23:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:14:40.219
00:14:40.219 real 0m9.072s
00:14:40.219 user 0m15.171s
00:14:40.219 sys 0m2.003s
00:14:40.220 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:40.220 03:23:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:40.220 ************************************
00:14:40.220 END TEST raid5f_state_function_test_sb
00:14:40.220 ************************************
00:14:40.480 03:23:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3
00:14:40.480 03:23:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:40.480 03:23:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:40.480 03:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:40.480 ************************************
00:14:40.480 START TEST raid5f_superblock_test
00:14:40.480 ************************************
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=93704
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93704
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 93704 ']'
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:40.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:40.480 03:23:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:40.480 [2024-11-21 03:23:27.900972] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization...
00:14:40.480 [2024-11-21 03:23:27.901188] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93704 ]
00:14:40.480 [2024-11-21 03:23:28.037979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:40.740 [2024-11-21 03:23:28.075316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:40.740 [2024-11-21 03:23:28.117135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:40.740 [2024-11-21 03:23:28.194225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:40.740 [2024-11-21 03:23:28.194262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.311 malloc1
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.311 [2024-11-21 03:23:28.757312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:41.311 [2024-11-21 03:23:28.757483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:41.311 [2024-11-21 03:23:28.757535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:41.311 [2024-11-21 03:23:28.757572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:41.311 [2024-11-21 03:23:28.760005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:41.311 [2024-11-21 03:23:28.760085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:41.311 pt1
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.311 malloc2
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.311 [2024-11-21 03:23:28.795950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:41.311 [2024-11-21 03:23:28.796010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:41.311 [2024-11-21 03:23:28.796046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:41.311 [2024-11-21 03:23:28.796054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:41.311 [2024-11-21 03:23:28.798415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:41.311 [2024-11-21 03:23:28.798450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:41.311 pt2
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.311 malloc3
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.311 [2024-11-21 03:23:28.830434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:41.311 [2024-11-21 03:23:28.830542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:41.311 [2024-11-21 03:23:28.830581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*:
io_device created at: 0x0x616000008a80 00:14:41.311 [2024-11-21 03:23:28.830618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.311 [2024-11-21 03:23:28.832929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.311 [2024-11-21 03:23:28.832995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:41.311 pt3 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.311 [2024-11-21 03:23:28.842457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.311 [2024-11-21 03:23:28.844582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.311 [2024-11-21 03:23:28.844683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:41.311 [2024-11-21 03:23:28.844853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:41.311 [2024-11-21 03:23:28.844902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:41.311 [2024-11-21 03:23:28.845180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:41.311 [2024-11-21 03:23:28.845649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:41.311 [2024-11-21 03:23:28.845693] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:41.311 [2024-11-21 03:23:28.845849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.311 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.312 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.312 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.312 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.312 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.571 03:23:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.571 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.571 "name": "raid_bdev1", 00:14:41.571 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:41.571 "strip_size_kb": 64, 00:14:41.571 "state": "online", 00:14:41.571 "raid_level": "raid5f", 00:14:41.571 "superblock": true, 00:14:41.571 "num_base_bdevs": 3, 00:14:41.571 "num_base_bdevs_discovered": 3, 00:14:41.571 "num_base_bdevs_operational": 3, 00:14:41.571 "base_bdevs_list": [ 00:14:41.571 { 00:14:41.571 "name": "pt1", 00:14:41.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.571 "is_configured": true, 00:14:41.571 "data_offset": 2048, 00:14:41.571 "data_size": 63488 00:14:41.571 }, 00:14:41.571 { 00:14:41.571 "name": "pt2", 00:14:41.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.571 "is_configured": true, 00:14:41.571 "data_offset": 2048, 00:14:41.571 "data_size": 63488 00:14:41.571 }, 00:14:41.571 { 00:14:41.571 "name": "pt3", 00:14:41.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.571 "is_configured": true, 00:14:41.571 "data_offset": 2048, 00:14:41.571 "data_size": 63488 00:14:41.571 } 00:14:41.571 ] 00:14:41.571 }' 00:14:41.571 03:23:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.571 03:23:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:41.844 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:41.844 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.844 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.844 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.844 
03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.845 [2024-11-21 03:23:29.320382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.845 "name": "raid_bdev1", 00:14:41.845 "aliases": [ 00:14:41.845 "9cea5caa-29e5-438b-845c-f548cb5434a5" 00:14:41.845 ], 00:14:41.845 "product_name": "Raid Volume", 00:14:41.845 "block_size": 512, 00:14:41.845 "num_blocks": 126976, 00:14:41.845 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:41.845 "assigned_rate_limits": { 00:14:41.845 "rw_ios_per_sec": 0, 00:14:41.845 "rw_mbytes_per_sec": 0, 00:14:41.845 "r_mbytes_per_sec": 0, 00:14:41.845 "w_mbytes_per_sec": 0 00:14:41.845 }, 00:14:41.845 "claimed": false, 00:14:41.845 "zoned": false, 00:14:41.845 "supported_io_types": { 00:14:41.845 "read": true, 00:14:41.845 "write": true, 00:14:41.845 "unmap": false, 00:14:41.845 "flush": false, 00:14:41.845 "reset": true, 00:14:41.845 "nvme_admin": false, 00:14:41.845 "nvme_io": false, 00:14:41.845 "nvme_io_md": false, 00:14:41.845 "write_zeroes": true, 00:14:41.845 "zcopy": false, 00:14:41.845 "get_zone_info": false, 00:14:41.845 "zone_management": false, 00:14:41.845 "zone_append": false, 00:14:41.845 "compare": false, 00:14:41.845 "compare_and_write": false, 00:14:41.845 "abort": false, 00:14:41.845 "seek_hole": 
false, 00:14:41.845 "seek_data": false, 00:14:41.845 "copy": false, 00:14:41.845 "nvme_iov_md": false 00:14:41.845 }, 00:14:41.845 "driver_specific": { 00:14:41.845 "raid": { 00:14:41.845 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:41.845 "strip_size_kb": 64, 00:14:41.845 "state": "online", 00:14:41.845 "raid_level": "raid5f", 00:14:41.845 "superblock": true, 00:14:41.845 "num_base_bdevs": 3, 00:14:41.845 "num_base_bdevs_discovered": 3, 00:14:41.845 "num_base_bdevs_operational": 3, 00:14:41.845 "base_bdevs_list": [ 00:14:41.845 { 00:14:41.845 "name": "pt1", 00:14:41.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.845 "is_configured": true, 00:14:41.845 "data_offset": 2048, 00:14:41.845 "data_size": 63488 00:14:41.845 }, 00:14:41.845 { 00:14:41.845 "name": "pt2", 00:14:41.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.845 "is_configured": true, 00:14:41.845 "data_offset": 2048, 00:14:41.845 "data_size": 63488 00:14:41.845 }, 00:14:41.845 { 00:14:41.845 "name": "pt3", 00:14:41.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.845 "is_configured": true, 00:14:41.845 "data_offset": 2048, 00:14:41.845 "data_size": 63488 00:14:41.845 } 00:14:41.845 ] 00:14:41.845 } 00:14:41.845 } 00:14:41.845 }' 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:41.845 pt2 00:14:41.845 pt3' 00:14:41.845 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.108 [2024-11-21 03:23:29.572427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9cea5caa-29e5-438b-845c-f548cb5434a5 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9cea5caa-29e5-438b-845c-f548cb5434a5 ']' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.108 [2024-11-21 03:23:29.616238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.108 [2024-11-21 
03:23:29.616301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.108 [2024-11-21 03:23:29.616401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.108 [2024-11-21 03:23:29.616496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.108 [2024-11-21 03:23:29.616556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:42.108 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.367 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i 
in "${base_bdevs_pt[@]}" 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:42.368 03:23:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 [2024-11-21 03:23:29.772341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:42.368 [2024-11-21 03:23:29.774466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:42.368 [2024-11-21 03:23:29.774517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:42.368 [2024-11-21 03:23:29.774559] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:42.368 [2024-11-21 03:23:29.774621] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:42.368 [2024-11-21 03:23:29.774638] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:42.368 [2024-11-21 03:23:29.774652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:14:42.368 [2024-11-21 03:23:29.774667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:14:42.368 request: 00:14:42.368 { 00:14:42.368 "name": "raid_bdev1", 00:14:42.368 "raid_level": "raid5f", 00:14:42.368 "base_bdevs": [ 00:14:42.368 "malloc1", 00:14:42.368 "malloc2", 00:14:42.368 "malloc3" 00:14:42.368 ], 00:14:42.368 "strip_size_kb": 64, 00:14:42.368 "superblock": false, 00:14:42.368 "method": "bdev_raid_create", 00:14:42.368 "req_id": 1 00:14:42.368 } 00:14:42.368 Got JSON-RPC error response 00:14:42.368 response: 00:14:42.368 { 00:14:42.368 "code": -17, 00:14:42.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:42.368 } 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n 
'' ']' 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 [2024-11-21 03:23:29.840300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:42.368 [2024-11-21 03:23:29.840384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.368 [2024-11-21 03:23:29.840436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:42.368 [2024-11-21 03:23:29.840467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.368 [2024-11-21 03:23:29.842942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.368 [2024-11-21 03:23:29.843008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:42.368 [2024-11-21 03:23:29.843133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:42.368 [2024-11-21 03:23:29.843205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:42.368 pt1 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.368 03:23:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.368 "name": "raid_bdev1", 00:14:42.368 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:42.368 "strip_size_kb": 64, 00:14:42.368 "state": "configuring", 00:14:42.368 "raid_level": "raid5f", 00:14:42.368 "superblock": true, 00:14:42.368 "num_base_bdevs": 3, 00:14:42.368 "num_base_bdevs_discovered": 1, 00:14:42.368 "num_base_bdevs_operational": 3, 00:14:42.368 "base_bdevs_list": [ 00:14:42.368 { 00:14:42.368 "name": "pt1", 00:14:42.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.368 "is_configured": true, 00:14:42.368 "data_offset": 2048, 00:14:42.368 "data_size": 63488 00:14:42.368 }, 00:14:42.368 { 00:14:42.368 "name": null, 00:14:42.368 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:42.368 "is_configured": false, 00:14:42.368 "data_offset": 2048, 00:14:42.368 "data_size": 63488 00:14:42.368 }, 00:14:42.368 { 00:14:42.368 "name": null, 00:14:42.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.368 "is_configured": false, 00:14:42.368 "data_offset": 2048, 00:14:42.368 "data_size": 63488 00:14:42.368 } 00:14:42.368 ] 00:14:42.368 }' 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.368 03:23:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.937 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:42.937 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.937 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.937 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.937 [2024-11-21 03:23:30.332447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.937 [2024-11-21 03:23:30.332573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.937 [2024-11-21 03:23:30.332603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:42.937 [2024-11-21 03:23:30.332612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.937 [2024-11-21 03:23:30.333008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.937 [2024-11-21 03:23:30.333024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.937 [2024-11-21 03:23:30.333104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.938 [2024-11-21 03:23:30.333124] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.938 pt2 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.938 [2024-11-21 03:23:30.344498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.938 
03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.938 "name": "raid_bdev1", 00:14:42.938 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:42.938 "strip_size_kb": 64, 00:14:42.938 "state": "configuring", 00:14:42.938 "raid_level": "raid5f", 00:14:42.938 "superblock": true, 00:14:42.938 "num_base_bdevs": 3, 00:14:42.938 "num_base_bdevs_discovered": 1, 00:14:42.938 "num_base_bdevs_operational": 3, 00:14:42.938 "base_bdevs_list": [ 00:14:42.938 { 00:14:42.938 "name": "pt1", 00:14:42.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.938 "is_configured": true, 00:14:42.938 "data_offset": 2048, 00:14:42.938 "data_size": 63488 00:14:42.938 }, 00:14:42.938 { 00:14:42.938 "name": null, 00:14:42.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.938 "is_configured": false, 00:14:42.938 "data_offset": 0, 00:14:42.938 "data_size": 63488 00:14:42.938 }, 00:14:42.938 { 00:14:42.938 "name": null, 00:14:42.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.938 "is_configured": false, 00:14:42.938 "data_offset": 2048, 00:14:42.938 "data_size": 63488 00:14:42.938 } 00:14:42.938 ] 00:14:42.938 }' 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.938 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.508 [2024-11-21 03:23:30.772614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.508 [2024-11-21 03:23:30.772757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.508 [2024-11-21 03:23:30.772785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:43.508 [2024-11-21 03:23:30.772800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.508 [2024-11-21 03:23:30.773234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.508 [2024-11-21 03:23:30.773253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.508 [2024-11-21 03:23:30.773326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:43.508 [2024-11-21 03:23:30.773350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.508 pt2 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:43.508 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.508 
03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.508 [2024-11-21 03:23:30.784566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:43.508 [2024-11-21 03:23:30.784621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.508 [2024-11-21 03:23:30.784634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:43.508 [2024-11-21 03:23:30.784644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.508 [2024-11-21 03:23:30.784984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.509 [2024-11-21 03:23:30.785001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:43.509 [2024-11-21 03:23:30.785073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:43.509 [2024-11-21 03:23:30.785094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:43.509 [2024-11-21 03:23:30.785195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:43.509 [2024-11-21 03:23:30.785207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:43.509 [2024-11-21 03:23:30.785463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:43.509 [2024-11-21 03:23:30.785908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:43.509 [2024-11-21 03:23:30.785919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:43.509 [2024-11-21 03:23:30.786037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.509 pt3 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.509 03:23:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.509 "name": 
"raid_bdev1", 00:14:43.509 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:43.509 "strip_size_kb": 64, 00:14:43.509 "state": "online", 00:14:43.509 "raid_level": "raid5f", 00:14:43.509 "superblock": true, 00:14:43.509 "num_base_bdevs": 3, 00:14:43.509 "num_base_bdevs_discovered": 3, 00:14:43.509 "num_base_bdevs_operational": 3, 00:14:43.509 "base_bdevs_list": [ 00:14:43.509 { 00:14:43.509 "name": "pt1", 00:14:43.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.509 "is_configured": true, 00:14:43.509 "data_offset": 2048, 00:14:43.509 "data_size": 63488 00:14:43.509 }, 00:14:43.509 { 00:14:43.509 "name": "pt2", 00:14:43.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.509 "is_configured": true, 00:14:43.509 "data_offset": 2048, 00:14:43.509 "data_size": 63488 00:14:43.509 }, 00:14:43.509 { 00:14:43.509 "name": "pt3", 00:14:43.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.509 "is_configured": true, 00:14:43.509 "data_offset": 2048, 00:14:43.509 "data_size": 63488 00:14:43.509 } 00:14:43.509 ] 00:14:43.509 }' 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.509 03:23:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.768 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.768 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:43.768 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.768 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.768 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.768 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.769 [2024-11-21 03:23:31.240888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.769 "name": "raid_bdev1", 00:14:43.769 "aliases": [ 00:14:43.769 "9cea5caa-29e5-438b-845c-f548cb5434a5" 00:14:43.769 ], 00:14:43.769 "product_name": "Raid Volume", 00:14:43.769 "block_size": 512, 00:14:43.769 "num_blocks": 126976, 00:14:43.769 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:43.769 "assigned_rate_limits": { 00:14:43.769 "rw_ios_per_sec": 0, 00:14:43.769 "rw_mbytes_per_sec": 0, 00:14:43.769 "r_mbytes_per_sec": 0, 00:14:43.769 "w_mbytes_per_sec": 0 00:14:43.769 }, 00:14:43.769 "claimed": false, 00:14:43.769 "zoned": false, 00:14:43.769 "supported_io_types": { 00:14:43.769 "read": true, 00:14:43.769 "write": true, 00:14:43.769 "unmap": false, 00:14:43.769 "flush": false, 00:14:43.769 "reset": true, 00:14:43.769 "nvme_admin": false, 00:14:43.769 "nvme_io": false, 00:14:43.769 "nvme_io_md": false, 00:14:43.769 "write_zeroes": true, 00:14:43.769 "zcopy": false, 00:14:43.769 "get_zone_info": false, 00:14:43.769 "zone_management": false, 00:14:43.769 "zone_append": false, 00:14:43.769 "compare": false, 00:14:43.769 "compare_and_write": false, 00:14:43.769 "abort": false, 00:14:43.769 "seek_hole": false, 00:14:43.769 "seek_data": false, 00:14:43.769 "copy": false, 00:14:43.769 "nvme_iov_md": false 00:14:43.769 }, 00:14:43.769 "driver_specific": { 00:14:43.769 
"raid": { 00:14:43.769 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:43.769 "strip_size_kb": 64, 00:14:43.769 "state": "online", 00:14:43.769 "raid_level": "raid5f", 00:14:43.769 "superblock": true, 00:14:43.769 "num_base_bdevs": 3, 00:14:43.769 "num_base_bdevs_discovered": 3, 00:14:43.769 "num_base_bdevs_operational": 3, 00:14:43.769 "base_bdevs_list": [ 00:14:43.769 { 00:14:43.769 "name": "pt1", 00:14:43.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.769 "is_configured": true, 00:14:43.769 "data_offset": 2048, 00:14:43.769 "data_size": 63488 00:14:43.769 }, 00:14:43.769 { 00:14:43.769 "name": "pt2", 00:14:43.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.769 "is_configured": true, 00:14:43.769 "data_offset": 2048, 00:14:43.769 "data_size": 63488 00:14:43.769 }, 00:14:43.769 { 00:14:43.769 "name": "pt3", 00:14:43.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.769 "is_configured": true, 00:14:43.769 "data_offset": 2048, 00:14:43.769 "data_size": 63488 00:14:43.769 } 00:14:43.769 ] 00:14:43.769 } 00:14:43.769 } 00:14:43.769 }' 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:43.769 pt2 00:14:43.769 pt3' 00:14:43.769 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.029 [2024-11-21 03:23:31.536919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9cea5caa-29e5-438b-845c-f548cb5434a5 '!=' 9cea5caa-29e5-438b-845c-f548cb5434a5 ']' 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:44.029 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.030 [2024-11-21 03:23:31.584808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.030 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.290 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.290 "name": "raid_bdev1", 00:14:44.290 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 
00:14:44.290 "strip_size_kb": 64, 00:14:44.290 "state": "online", 00:14:44.290 "raid_level": "raid5f", 00:14:44.290 "superblock": true, 00:14:44.290 "num_base_bdevs": 3, 00:14:44.290 "num_base_bdevs_discovered": 2, 00:14:44.290 "num_base_bdevs_operational": 2, 00:14:44.290 "base_bdevs_list": [ 00:14:44.290 { 00:14:44.290 "name": null, 00:14:44.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.290 "is_configured": false, 00:14:44.290 "data_offset": 0, 00:14:44.290 "data_size": 63488 00:14:44.290 }, 00:14:44.291 { 00:14:44.291 "name": "pt2", 00:14:44.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.291 "is_configured": true, 00:14:44.291 "data_offset": 2048, 00:14:44.291 "data_size": 63488 00:14:44.291 }, 00:14:44.291 { 00:14:44.291 "name": "pt3", 00:14:44.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.291 "is_configured": true, 00:14:44.291 "data_offset": 2048, 00:14:44.291 "data_size": 63488 00:14:44.291 } 00:14:44.291 ] 00:14:44.291 }' 00:14:44.291 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.291 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.551 03:23:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.551 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.551 03:23:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.551 [2024-11-21 03:23:31.996858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.551 [2024-11-21 03:23:31.996935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.551 [2024-11-21 03:23:31.996998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.551 [2024-11-21 03:23:31.997060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:14:44.551 [2024-11-21 03:23:31.997073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:44.551 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:44.552 
03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.552 [2024-11-21 03:23:32.080893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:44.552 [2024-11-21 03:23:32.080946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.552 [2024-11-21 03:23:32.080963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:44.552 [2024-11-21 03:23:32.080975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.552 [2024-11-21 03:23:32.083416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.552 [2024-11-21 03:23:32.083454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:44.552 [2024-11-21 03:23:32.083518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:44.552 [2024-11-21 03:23:32.083555] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.552 pt2 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.552 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.812 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:44.812 "name": "raid_bdev1", 00:14:44.813 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:44.813 "strip_size_kb": 64, 00:14:44.813 "state": "configuring", 00:14:44.813 "raid_level": "raid5f", 00:14:44.813 "superblock": true, 00:14:44.813 "num_base_bdevs": 3, 00:14:44.813 "num_base_bdevs_discovered": 1, 00:14:44.813 "num_base_bdevs_operational": 2, 00:14:44.813 "base_bdevs_list": [ 00:14:44.813 { 00:14:44.813 "name": null, 00:14:44.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.813 "is_configured": false, 00:14:44.813 "data_offset": 2048, 00:14:44.813 "data_size": 63488 00:14:44.813 }, 00:14:44.813 { 00:14:44.813 "name": "pt2", 00:14:44.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.813 "is_configured": true, 00:14:44.813 "data_offset": 2048, 00:14:44.813 "data_size": 63488 00:14:44.813 }, 00:14:44.813 { 00:14:44.813 "name": null, 00:14:44.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.813 "is_configured": false, 00:14:44.813 "data_offset": 2048, 00:14:44.813 "data_size": 63488 00:14:44.813 } 00:14:44.813 ] 00:14:44.813 }' 00:14:44.813 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.813 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.073 [2024-11-21 
03:23:32.509063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:45.073 [2024-11-21 03:23:32.509182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.073 [2024-11-21 03:23:32.509220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:45.073 [2024-11-21 03:23:32.509253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.073 [2024-11-21 03:23:32.509693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.073 [2024-11-21 03:23:32.509754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:45.073 [2024-11-21 03:23:32.509847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:45.073 [2024-11-21 03:23:32.509897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:45.073 [2024-11-21 03:23:32.510009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:45.073 [2024-11-21 03:23:32.510064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:45.073 [2024-11-21 03:23:32.510313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:45.073 [2024-11-21 03:23:32.510792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:45.073 [2024-11-21 03:23:32.510836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:45.073 [2024-11-21 03:23:32.511141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.073 pt3 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:45.073 
03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.073 "name": "raid_bdev1", 00:14:45.073 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:45.073 "strip_size_kb": 64, 00:14:45.073 "state": "online", 00:14:45.073 "raid_level": "raid5f", 00:14:45.073 "superblock": true, 00:14:45.073 "num_base_bdevs": 3, 00:14:45.073 "num_base_bdevs_discovered": 2, 00:14:45.073 "num_base_bdevs_operational": 2, 
00:14:45.073 "base_bdevs_list": [ 00:14:45.073 { 00:14:45.073 "name": null, 00:14:45.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.073 "is_configured": false, 00:14:45.073 "data_offset": 2048, 00:14:45.073 "data_size": 63488 00:14:45.073 }, 00:14:45.073 { 00:14:45.073 "name": "pt2", 00:14:45.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.073 "is_configured": true, 00:14:45.073 "data_offset": 2048, 00:14:45.073 "data_size": 63488 00:14:45.073 }, 00:14:45.073 { 00:14:45.073 "name": "pt3", 00:14:45.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.073 "is_configured": true, 00:14:45.073 "data_offset": 2048, 00:14:45.073 "data_size": 63488 00:14:45.073 } 00:14:45.073 ] 00:14:45.073 }' 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.073 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.644 [2024-11-21 03:23:32.985302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.644 [2024-11-21 03:23:32.985328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.644 [2024-11-21 03:23:32.985384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.644 [2024-11-21 03:23:32.985437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.644 [2024-11-21 03:23:32.985446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.644 03:23:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.644 [2024-11-21 03:23:33.061303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.644 [2024-11-21 03:23:33.061350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:45.644 [2024-11-21 03:23:33.061367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:45.644 [2024-11-21 03:23:33.061375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.644 [2024-11-21 03:23:33.063804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.644 [2024-11-21 03:23:33.063838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.644 [2024-11-21 03:23:33.063911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:45.644 [2024-11-21 03:23:33.063951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:45.644 [2024-11-21 03:23:33.064090] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:45.644 [2024-11-21 03:23:33.064102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.644 [2024-11-21 03:23:33.064126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:14:45.644 [2024-11-21 03:23:33.064166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:45.644 pt1 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.644 "name": "raid_bdev1", 00:14:45.644 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:45.644 "strip_size_kb": 64, 00:14:45.644 "state": "configuring", 00:14:45.644 "raid_level": "raid5f", 00:14:45.644 "superblock": true, 00:14:45.644 "num_base_bdevs": 3, 00:14:45.644 "num_base_bdevs_discovered": 1, 00:14:45.644 "num_base_bdevs_operational": 2, 00:14:45.644 "base_bdevs_list": [ 00:14:45.644 { 00:14:45.644 "name": null, 00:14:45.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.644 "is_configured": false, 00:14:45.644 "data_offset": 2048, 00:14:45.644 "data_size": 63488 00:14:45.644 }, 00:14:45.644 { 00:14:45.644 "name": "pt2", 
00:14:45.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.644 "is_configured": true, 00:14:45.644 "data_offset": 2048, 00:14:45.644 "data_size": 63488 00:14:45.644 }, 00:14:45.644 { 00:14:45.644 "name": null, 00:14:45.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.644 "is_configured": false, 00:14:45.644 "data_offset": 2048, 00:14:45.644 "data_size": 63488 00:14:45.644 } 00:14:45.644 ] 00:14:45.644 }' 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.644 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.213 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:46.213 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:46.213 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.214 [2024-11-21 03:23:33.573447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:46.214 [2024-11-21 03:23:33.573533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.214 [2024-11-21 03:23:33.573581] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:46.214 [2024-11-21 03:23:33.573608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.214 [2024-11-21 03:23:33.573993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.214 [2024-11-21 03:23:33.574053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:46.214 [2024-11-21 03:23:33.574133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:46.214 [2024-11-21 03:23:33.574176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:46.214 [2024-11-21 03:23:33.574271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:46.214 [2024-11-21 03:23:33.574305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.214 [2024-11-21 03:23:33.574563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:46.214 [2024-11-21 03:23:33.575087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:46.214 [2024-11-21 03:23:33.575138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:46.214 [2024-11-21 03:23:33.575329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.214 pt3 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.214 "name": "raid_bdev1", 00:14:46.214 "uuid": "9cea5caa-29e5-438b-845c-f548cb5434a5", 00:14:46.214 "strip_size_kb": 64, 00:14:46.214 "state": "online", 00:14:46.214 "raid_level": "raid5f", 00:14:46.214 "superblock": true, 00:14:46.214 "num_base_bdevs": 3, 00:14:46.214 "num_base_bdevs_discovered": 2, 00:14:46.214 "num_base_bdevs_operational": 2, 00:14:46.214 "base_bdevs_list": [ 00:14:46.214 { 00:14:46.214 "name": null, 00:14:46.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.214 "is_configured": false, 00:14:46.214 "data_offset": 2048, 00:14:46.214 "data_size": 63488 00:14:46.214 }, 00:14:46.214 { 
00:14:46.214 "name": "pt2", 00:14:46.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.214 "is_configured": true, 00:14:46.214 "data_offset": 2048, 00:14:46.214 "data_size": 63488 00:14:46.214 }, 00:14:46.214 { 00:14:46.214 "name": "pt3", 00:14:46.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.214 "is_configured": true, 00:14:46.214 "data_offset": 2048, 00:14:46.214 "data_size": 63488 00:14:46.214 } 00:14:46.214 ] 00:14:46.214 }' 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.214 03:23:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.784 [2024-11-21 03:23:34.113736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9cea5caa-29e5-438b-845c-f548cb5434a5 '!=' 9cea5caa-29e5-438b-845c-f548cb5434a5 ']' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93704 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 93704 ']' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 93704 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93704 00:14:46.784 killing process with pid 93704 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93704' 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 93704 00:14:46.784 [2024-11-21 03:23:34.179769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.784 [2024-11-21 03:23:34.179836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.784 [2024-11-21 03:23:34.179885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.784 [2024-11-21 03:23:34.179897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:46.784 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 93704 00:14:46.784 
[2024-11-21 03:23:34.239509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.044 ************************************ 00:14:47.044 END TEST raid5f_superblock_test 00:14:47.044 ************************************ 00:14:47.044 03:23:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:47.044 00:14:47.044 real 0m6.765s 00:14:47.044 user 0m11.144s 00:14:47.044 sys 0m1.526s 00:14:47.044 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.044 03:23:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.304 03:23:34 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:47.304 03:23:34 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:47.304 03:23:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:47.304 03:23:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.304 03:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.304 ************************************ 00:14:47.304 START TEST raid5f_rebuild_test 00:14:47.304 ************************************ 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 
00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:47.304 03:23:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94137 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94137 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 94137 ']' 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.304 03:23:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.304 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.304 Zero copy mechanism will not be used. 00:14:47.304 [2024-11-21 03:23:34.756084] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:14:47.304 [2024-11-21 03:23:34.756190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94137 ] 00:14:47.564 [2024-11-21 03:23:34.891622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:47.564 [2024-11-21 03:23:34.930427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.564 [2024-11-21 03:23:34.971183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.564 [2024-11-21 03:23:35.047067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.564 [2024-11-21 03:23:35.047199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.134 BaseBdev1_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.134 [2024-11-21 03:23:35.601932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:48.134 [2024-11-21 03:23:35.602012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.134 [2024-11-21 03:23:35.602081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:48.134 [2024-11-21 03:23:35.602105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.134 [2024-11-21 03:23:35.604455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.134 [2024-11-21 03:23:35.604493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.134 BaseBdev1 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.134 BaseBdev2_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.134 [2024-11-21 03:23:35.636254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:14:48.134 [2024-11-21 03:23:35.636311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.134 [2024-11-21 03:23:35.636331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:48.134 [2024-11-21 03:23:35.636341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.134 [2024-11-21 03:23:35.638619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.134 [2024-11-21 03:23:35.638657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.134 BaseBdev2 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.134 BaseBdev3_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.134 [2024-11-21 03:23:35.670531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:48.134 [2024-11-21 03:23:35.670646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.134 [2024-11-21 03:23:35.670672] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:14:48.134 [2024-11-21 03:23:35.670683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.134 [2024-11-21 03:23:35.672974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.134 [2024-11-21 03:23:35.673011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:48.134 BaseBdev3 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.134 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.394 spare_malloc 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.394 spare_delay 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.394 [2024-11-21 03:23:35.727281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.394 [2024-11-21 03:23:35.727386] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.394 [2024-11-21 03:23:35.727423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:48.394 [2024-11-21 03:23:35.727434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.394 [2024-11-21 03:23:35.729853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.394 [2024-11-21 03:23:35.729889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.394 spare 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.394 [2024-11-21 03:23:35.739343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.394 [2024-11-21 03:23:35.741356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.394 [2024-11-21 03:23:35.741408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.394 [2024-11-21 03:23:35.741482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:48.394 [2024-11-21 03:23:35.741491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.394 [2024-11-21 03:23:35.741738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:48.394 [2024-11-21 03:23:35.742156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:48.394 [2024-11-21 03:23:35.742170] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:48.394 [2024-11-21 03:23:35.742289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.394 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.395 03:23:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.395 "name": "raid_bdev1", 00:14:48.395 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:48.395 "strip_size_kb": 64, 00:14:48.395 "state": "online", 00:14:48.395 "raid_level": "raid5f", 00:14:48.395 "superblock": false, 00:14:48.395 "num_base_bdevs": 3, 00:14:48.395 "num_base_bdevs_discovered": 3, 00:14:48.395 "num_base_bdevs_operational": 3, 00:14:48.395 "base_bdevs_list": [ 00:14:48.395 { 00:14:48.395 "name": "BaseBdev1", 00:14:48.395 "uuid": "0a424d72-3967-5b38-a6bc-456d1971571a", 00:14:48.395 "is_configured": true, 00:14:48.395 "data_offset": 0, 00:14:48.395 "data_size": 65536 00:14:48.395 }, 00:14:48.395 { 00:14:48.395 "name": "BaseBdev2", 00:14:48.395 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:48.395 "is_configured": true, 00:14:48.395 "data_offset": 0, 00:14:48.395 "data_size": 65536 00:14:48.395 }, 00:14:48.395 { 00:14:48.395 "name": "BaseBdev3", 00:14:48.395 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:48.395 "is_configured": true, 00:14:48.395 "data_offset": 0, 00:14:48.395 "data_size": 65536 00:14:48.395 } 00:14:48.395 ] 00:14:48.395 }' 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.395 03:23:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.655 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:48.655 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.655 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.655 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.655 [2024-11-21 03:23:36.192735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.655 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:48.655 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:48.914 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:48.914 [2024-11-21 03:23:36.452711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:48.914 /dev/nbd0 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.195 1+0 records in 00:14:49.195 1+0 records out 00:14:49.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418019 s, 9.8 MB/s 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:49.195 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:49.466 512+0 records in 00:14:49.466 512+0 records out 00:14:49.466 67108864 bytes (67 MB, 64 MiB) copied, 0.294023 s, 228 MB/s 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.466 03:23:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.726 
03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.726 [2024-11-21 03:23:37.037904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.726 [2024-11-21 03:23:37.048886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.726 "name": "raid_bdev1", 00:14:49.726 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:49.726 "strip_size_kb": 64, 00:14:49.726 "state": "online", 00:14:49.726 "raid_level": "raid5f", 00:14:49.726 "superblock": false, 00:14:49.726 "num_base_bdevs": 3, 00:14:49.726 "num_base_bdevs_discovered": 2, 00:14:49.726 "num_base_bdevs_operational": 2, 00:14:49.726 "base_bdevs_list": [ 00:14:49.726 { 00:14:49.726 "name": null, 00:14:49.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.726 "is_configured": false, 00:14:49.726 "data_offset": 0, 00:14:49.726 "data_size": 65536 00:14:49.726 }, 00:14:49.726 { 00:14:49.726 "name": "BaseBdev2", 00:14:49.726 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:49.726 "is_configured": true, 00:14:49.726 "data_offset": 0, 00:14:49.726 "data_size": 65536 00:14:49.726 }, 00:14:49.726 { 00:14:49.726 "name": "BaseBdev3", 00:14:49.726 "uuid": 
"b2c4ed90-9712-5e08-97b7-399286292739", 00:14:49.726 "is_configured": true, 00:14:49.726 "data_offset": 0, 00:14:49.726 "data_size": 65536 00:14:49.726 } 00:14:49.726 ] 00:14:49.726 }' 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.726 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.986 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.986 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.986 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.986 [2024-11-21 03:23:37.505015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.986 [2024-11-21 03:23:37.509959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ba90 00:14:49.987 03:23:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.987 03:23:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:49.987 [2024-11-21 03:23:37.512222] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.364 03:23:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.364 "name": "raid_bdev1", 00:14:51.364 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:51.364 "strip_size_kb": 64, 00:14:51.364 "state": "online", 00:14:51.364 "raid_level": "raid5f", 00:14:51.364 "superblock": false, 00:14:51.364 "num_base_bdevs": 3, 00:14:51.364 "num_base_bdevs_discovered": 3, 00:14:51.364 "num_base_bdevs_operational": 3, 00:14:51.364 "process": { 00:14:51.364 "type": "rebuild", 00:14:51.364 "target": "spare", 00:14:51.364 "progress": { 00:14:51.364 "blocks": 20480, 00:14:51.364 "percent": 15 00:14:51.364 } 00:14:51.364 }, 00:14:51.364 "base_bdevs_list": [ 00:14:51.364 { 00:14:51.364 "name": "spare", 00:14:51.364 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:51.364 "is_configured": true, 00:14:51.364 "data_offset": 0, 00:14:51.364 "data_size": 65536 00:14:51.364 }, 00:14:51.364 { 00:14:51.364 "name": "BaseBdev2", 00:14:51.364 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:51.364 "is_configured": true, 00:14:51.364 "data_offset": 0, 00:14:51.364 "data_size": 65536 00:14:51.364 }, 00:14:51.364 { 00:14:51.364 "name": "BaseBdev3", 00:14:51.364 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:51.364 "is_configured": true, 00:14:51.364 "data_offset": 0, 00:14:51.364 "data_size": 65536 00:14:51.364 } 00:14:51.364 ] 00:14:51.364 }' 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.364 [2024-11-21 03:23:38.666284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.364 [2024-11-21 03:23:38.723431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.364 [2024-11-21 03:23:38.723535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.364 [2024-11-21 03:23:38.723561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.364 [2024-11-21 03:23:38.723574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.364 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.365 "name": "raid_bdev1", 00:14:51.365 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:51.365 "strip_size_kb": 64, 00:14:51.365 "state": "online", 00:14:51.365 "raid_level": "raid5f", 00:14:51.365 "superblock": false, 00:14:51.365 "num_base_bdevs": 3, 00:14:51.365 "num_base_bdevs_discovered": 2, 00:14:51.365 "num_base_bdevs_operational": 2, 00:14:51.365 "base_bdevs_list": [ 00:14:51.365 { 00:14:51.365 "name": null, 00:14:51.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.365 "is_configured": false, 00:14:51.365 "data_offset": 0, 00:14:51.365 "data_size": 65536 00:14:51.365 }, 00:14:51.365 { 00:14:51.365 "name": "BaseBdev2", 00:14:51.365 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:51.365 "is_configured": true, 00:14:51.365 "data_offset": 0, 00:14:51.365 "data_size": 65536 00:14:51.365 }, 00:14:51.365 { 00:14:51.365 "name": "BaseBdev3", 00:14:51.365 "uuid": 
"b2c4ed90-9712-5e08-97b7-399286292739", 00:14:51.365 "is_configured": true, 00:14:51.365 "data_offset": 0, 00:14:51.365 "data_size": 65536 00:14:51.365 } 00:14:51.365 ] 00:14:51.365 }' 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.365 03:23:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.932 "name": "raid_bdev1", 00:14:51.932 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:51.932 "strip_size_kb": 64, 00:14:51.932 "state": "online", 00:14:51.932 "raid_level": "raid5f", 00:14:51.932 "superblock": false, 00:14:51.932 "num_base_bdevs": 3, 00:14:51.932 "num_base_bdevs_discovered": 2, 00:14:51.932 "num_base_bdevs_operational": 2, 00:14:51.932 "base_bdevs_list": [ 00:14:51.932 { 00:14:51.932 
"name": null, 00:14:51.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.932 "is_configured": false, 00:14:51.932 "data_offset": 0, 00:14:51.932 "data_size": 65536 00:14:51.932 }, 00:14:51.932 { 00:14:51.932 "name": "BaseBdev2", 00:14:51.932 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:51.932 "is_configured": true, 00:14:51.932 "data_offset": 0, 00:14:51.932 "data_size": 65536 00:14:51.932 }, 00:14:51.932 { 00:14:51.932 "name": "BaseBdev3", 00:14:51.932 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:51.932 "is_configured": true, 00:14:51.932 "data_offset": 0, 00:14:51.932 "data_size": 65536 00:14:51.932 } 00:14:51.932 ] 00:14:51.932 }' 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.932 [2024-11-21 03:23:39.386738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.932 [2024-11-21 03:23:39.391823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.932 03:23:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:51.932 [2024-11-21 03:23:39.394560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.868 03:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.127 "name": "raid_bdev1", 00:14:53.127 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:53.127 "strip_size_kb": 64, 00:14:53.127 "state": "online", 00:14:53.127 "raid_level": "raid5f", 00:14:53.127 "superblock": false, 00:14:53.127 "num_base_bdevs": 3, 00:14:53.127 "num_base_bdevs_discovered": 3, 00:14:53.127 "num_base_bdevs_operational": 3, 00:14:53.127 "process": { 00:14:53.127 "type": "rebuild", 00:14:53.127 "target": "spare", 00:14:53.127 "progress": { 00:14:53.127 "blocks": 18432, 00:14:53.127 "percent": 14 00:14:53.127 } 00:14:53.127 }, 00:14:53.127 "base_bdevs_list": [ 00:14:53.127 { 00:14:53.127 "name": "spare", 00:14:53.127 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:53.127 "is_configured": true, 00:14:53.127 "data_offset": 0, 
00:14:53.127 "data_size": 65536 00:14:53.127 }, 00:14:53.127 { 00:14:53.127 "name": "BaseBdev2", 00:14:53.127 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:53.127 "is_configured": true, 00:14:53.127 "data_offset": 0, 00:14:53.127 "data_size": 65536 00:14:53.127 }, 00:14:53.127 { 00:14:53.127 "name": "BaseBdev3", 00:14:53.127 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:53.127 "is_configured": true, 00:14:53.127 "data_offset": 0, 00:14:53.127 "data_size": 65536 00:14:53.127 } 00:14:53.127 ] 00:14:53.127 }' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.127 03:23:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.127 "name": "raid_bdev1", 00:14:53.127 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:53.127 "strip_size_kb": 64, 00:14:53.127 "state": "online", 00:14:53.127 "raid_level": "raid5f", 00:14:53.127 "superblock": false, 00:14:53.127 "num_base_bdevs": 3, 00:14:53.127 "num_base_bdevs_discovered": 3, 00:14:53.127 "num_base_bdevs_operational": 3, 00:14:53.127 "process": { 00:14:53.127 "type": "rebuild", 00:14:53.127 "target": "spare", 00:14:53.127 "progress": { 00:14:53.127 "blocks": 22528, 00:14:53.127 "percent": 17 00:14:53.127 } 00:14:53.127 }, 00:14:53.127 "base_bdevs_list": [ 00:14:53.127 { 00:14:53.127 "name": "spare", 00:14:53.127 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:53.127 "is_configured": true, 00:14:53.127 "data_offset": 0, 00:14:53.127 "data_size": 65536 00:14:53.127 }, 00:14:53.127 { 00:14:53.127 "name": "BaseBdev2", 00:14:53.127 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:53.127 "is_configured": true, 00:14:53.127 "data_offset": 0, 00:14:53.127 "data_size": 65536 00:14:53.127 }, 00:14:53.127 { 00:14:53.127 "name": "BaseBdev3", 00:14:53.127 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:53.127 "is_configured": true, 00:14:53.127 "data_offset": 0, 00:14:53.127 "data_size": 65536 00:14:53.127 } 
00:14:53.127 ] 00:14:53.127 }' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.127 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.386 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.386 03:23:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.323 "name": "raid_bdev1", 00:14:54.323 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:54.323 
"strip_size_kb": 64, 00:14:54.323 "state": "online", 00:14:54.323 "raid_level": "raid5f", 00:14:54.323 "superblock": false, 00:14:54.323 "num_base_bdevs": 3, 00:14:54.323 "num_base_bdevs_discovered": 3, 00:14:54.323 "num_base_bdevs_operational": 3, 00:14:54.323 "process": { 00:14:54.323 "type": "rebuild", 00:14:54.323 "target": "spare", 00:14:54.323 "progress": { 00:14:54.323 "blocks": 47104, 00:14:54.323 "percent": 35 00:14:54.323 } 00:14:54.323 }, 00:14:54.323 "base_bdevs_list": [ 00:14:54.323 { 00:14:54.323 "name": "spare", 00:14:54.323 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:54.323 "is_configured": true, 00:14:54.323 "data_offset": 0, 00:14:54.323 "data_size": 65536 00:14:54.323 }, 00:14:54.323 { 00:14:54.323 "name": "BaseBdev2", 00:14:54.323 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:54.323 "is_configured": true, 00:14:54.323 "data_offset": 0, 00:14:54.323 "data_size": 65536 00:14:54.323 }, 00:14:54.323 { 00:14:54.323 "name": "BaseBdev3", 00:14:54.323 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:54.323 "is_configured": true, 00:14:54.323 "data_offset": 0, 00:14:54.323 "data_size": 65536 00:14:54.323 } 00:14:54.323 ] 00:14:54.323 }' 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.323 03:23:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.698 03:23:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.698 "name": "raid_bdev1", 00:14:55.698 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:55.698 "strip_size_kb": 64, 00:14:55.698 "state": "online", 00:14:55.698 "raid_level": "raid5f", 00:14:55.698 "superblock": false, 00:14:55.698 "num_base_bdevs": 3, 00:14:55.698 "num_base_bdevs_discovered": 3, 00:14:55.698 "num_base_bdevs_operational": 3, 00:14:55.698 "process": { 00:14:55.698 "type": "rebuild", 00:14:55.698 "target": "spare", 00:14:55.698 "progress": { 00:14:55.698 "blocks": 69632, 00:14:55.698 "percent": 53 00:14:55.698 } 00:14:55.698 }, 00:14:55.698 "base_bdevs_list": [ 00:14:55.698 { 00:14:55.698 "name": "spare", 00:14:55.698 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:55.698 "is_configured": true, 00:14:55.698 "data_offset": 0, 00:14:55.698 "data_size": 65536 00:14:55.698 }, 00:14:55.698 { 00:14:55.698 "name": "BaseBdev2", 00:14:55.698 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:55.698 
"is_configured": true, 00:14:55.698 "data_offset": 0, 00:14:55.698 "data_size": 65536 00:14:55.698 }, 00:14:55.698 { 00:14:55.698 "name": "BaseBdev3", 00:14:55.698 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:55.698 "is_configured": true, 00:14:55.698 "data_offset": 0, 00:14:55.698 "data_size": 65536 00:14:55.698 } 00:14:55.698 ] 00:14:55.698 }' 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.698 03:23:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.698 03:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.698 03:23:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.634 "name": "raid_bdev1", 00:14:56.634 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:56.634 "strip_size_kb": 64, 00:14:56.634 "state": "online", 00:14:56.634 "raid_level": "raid5f", 00:14:56.634 "superblock": false, 00:14:56.634 "num_base_bdevs": 3, 00:14:56.634 "num_base_bdevs_discovered": 3, 00:14:56.634 "num_base_bdevs_operational": 3, 00:14:56.634 "process": { 00:14:56.634 "type": "rebuild", 00:14:56.634 "target": "spare", 00:14:56.634 "progress": { 00:14:56.634 "blocks": 92160, 00:14:56.634 "percent": 70 00:14:56.634 } 00:14:56.634 }, 00:14:56.634 "base_bdevs_list": [ 00:14:56.634 { 00:14:56.634 "name": "spare", 00:14:56.634 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:56.634 "is_configured": true, 00:14:56.634 "data_offset": 0, 00:14:56.634 "data_size": 65536 00:14:56.634 }, 00:14:56.634 { 00:14:56.634 "name": "BaseBdev2", 00:14:56.634 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:56.634 "is_configured": true, 00:14:56.634 "data_offset": 0, 00:14:56.634 "data_size": 65536 00:14:56.634 }, 00:14:56.634 { 00:14:56.634 "name": "BaseBdev3", 00:14:56.634 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:56.634 "is_configured": true, 00:14:56.634 "data_offset": 0, 00:14:56.634 "data_size": 65536 00:14:56.634 } 00:14:56.634 ] 00:14:56.634 }' 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.634 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.892 03:23:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.892 03:23:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.827 "name": "raid_bdev1", 00:14:57.827 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:57.827 "strip_size_kb": 64, 00:14:57.827 "state": "online", 00:14:57.827 "raid_level": "raid5f", 00:14:57.827 "superblock": false, 00:14:57.827 "num_base_bdevs": 3, 00:14:57.827 "num_base_bdevs_discovered": 3, 00:14:57.827 "num_base_bdevs_operational": 3, 00:14:57.827 "process": { 00:14:57.827 "type": "rebuild", 00:14:57.827 "target": "spare", 00:14:57.827 "progress": { 00:14:57.827 "blocks": 116736, 00:14:57.827 "percent": 89 00:14:57.827 } 00:14:57.827 }, 00:14:57.827 "base_bdevs_list": [ 00:14:57.827 { 
00:14:57.827 "name": "spare", 00:14:57.827 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:57.827 "is_configured": true, 00:14:57.827 "data_offset": 0, 00:14:57.827 "data_size": 65536 00:14:57.827 }, 00:14:57.827 { 00:14:57.827 "name": "BaseBdev2", 00:14:57.827 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:57.827 "is_configured": true, 00:14:57.827 "data_offset": 0, 00:14:57.827 "data_size": 65536 00:14:57.827 }, 00:14:57.827 { 00:14:57.827 "name": "BaseBdev3", 00:14:57.827 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:57.827 "is_configured": true, 00:14:57.827 "data_offset": 0, 00:14:57.827 "data_size": 65536 00:14:57.827 } 00:14:57.827 ] 00:14:57.827 }' 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.827 03:23:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.394 [2024-11-21 03:23:45.877029] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:58.394 [2024-11-21 03:23:45.877182] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:58.394 [2024-11-21 03:23:45.877242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.974 03:23:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.974 "name": "raid_bdev1", 00:14:58.974 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:58.974 "strip_size_kb": 64, 00:14:58.974 "state": "online", 00:14:58.974 "raid_level": "raid5f", 00:14:58.974 "superblock": false, 00:14:58.974 "num_base_bdevs": 3, 00:14:58.974 "num_base_bdevs_discovered": 3, 00:14:58.974 "num_base_bdevs_operational": 3, 00:14:58.974 "base_bdevs_list": [ 00:14:58.974 { 00:14:58.974 "name": "spare", 00:14:58.974 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:58.974 "is_configured": true, 00:14:58.974 "data_offset": 0, 00:14:58.974 "data_size": 65536 00:14:58.974 }, 00:14:58.974 { 00:14:58.974 "name": "BaseBdev2", 00:14:58.974 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:58.974 "is_configured": true, 00:14:58.974 "data_offset": 0, 00:14:58.974 "data_size": 65536 00:14:58.974 }, 00:14:58.974 { 00:14:58.974 "name": "BaseBdev3", 00:14:58.974 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:58.974 "is_configured": true, 00:14:58.974 "data_offset": 0, 00:14:58.974 "data_size": 65536 00:14:58.974 } 
00:14:58.974 ] 00:14:58.974 }' 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.974 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.975 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.234 "name": "raid_bdev1", 00:14:59.234 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:59.234 "strip_size_kb": 64, 00:14:59.234 "state": "online", 00:14:59.234 "raid_level": "raid5f", 00:14:59.234 "superblock": false, 
00:14:59.234 "num_base_bdevs": 3, 00:14:59.234 "num_base_bdevs_discovered": 3, 00:14:59.234 "num_base_bdevs_operational": 3, 00:14:59.234 "base_bdevs_list": [ 00:14:59.234 { 00:14:59.234 "name": "spare", 00:14:59.234 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 0, 00:14:59.234 "data_size": 65536 00:14:59.234 }, 00:14:59.234 { 00:14:59.234 "name": "BaseBdev2", 00:14:59.234 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 0, 00:14:59.234 "data_size": 65536 00:14:59.234 }, 00:14:59.234 { 00:14:59.234 "name": "BaseBdev3", 00:14:59.234 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 0, 00:14:59.234 "data_size": 65536 00:14:59.234 } 00:14:59.234 ] 00:14:59.234 }' 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.234 
03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.234 "name": "raid_bdev1", 00:14:59.234 "uuid": "f67f3bb9-8244-401a-a97f-a982b3d95f16", 00:14:59.234 "strip_size_kb": 64, 00:14:59.234 "state": "online", 00:14:59.234 "raid_level": "raid5f", 00:14:59.234 "superblock": false, 00:14:59.234 "num_base_bdevs": 3, 00:14:59.234 "num_base_bdevs_discovered": 3, 00:14:59.234 "num_base_bdevs_operational": 3, 00:14:59.234 "base_bdevs_list": [ 00:14:59.234 { 00:14:59.234 "name": "spare", 00:14:59.234 "uuid": "3dfb62b9-dd31-5051-bea0-f7daf4390e14", 00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 0, 00:14:59.234 "data_size": 65536 00:14:59.234 }, 00:14:59.234 { 00:14:59.234 "name": "BaseBdev2", 00:14:59.234 "uuid": "abf48985-f6d6-5ecc-8323-f4973285d464", 00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 0, 00:14:59.234 "data_size": 65536 00:14:59.234 }, 00:14:59.234 { 00:14:59.234 "name": "BaseBdev3", 00:14:59.234 "uuid": "b2c4ed90-9712-5e08-97b7-399286292739", 
00:14:59.234 "is_configured": true, 00:14:59.234 "data_offset": 0, 00:14:59.234 "data_size": 65536 00:14:59.234 } 00:14:59.234 ] 00:14:59.234 }' 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.234 03:23:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.801 [2024-11-21 03:23:47.087752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.801 [2024-11-21 03:23:47.087795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.801 [2024-11-21 03:23:47.087919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.801 [2024-11-21 03:23:47.088033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.801 [2024-11-21 03:23:47.088053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.801 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:00.061 /dev/nbd0 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.061 1+0 records in 00:15:00.061 1+0 records out 00:15:00.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304059 s, 13.5 MB/s 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.061 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:00.321 /dev/nbd1 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.321 03:23:47 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.321 1+0 records in 00:15:00.321 1+0 records out 00:15:00.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303296 s, 13.5 MB/s 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.321 03:23:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.581 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94137 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 94137 ']' 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 94137 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94137 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.843 killing process with pid 94137 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94137' 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 94137 00:15:00.843 
Received shutdown signal, test time was about 60.000000 seconds 00:15:00.843 00:15:00.843 Latency(us) 00:15:00.843 [2024-11-21T03:23:48.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.843 [2024-11-21T03:23:48.409Z] =================================================================================================================== 00:15:00.843 [2024-11-21T03:23:48.409Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.843 [2024-11-21 03:23:48.291123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.843 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 94137 00:15:00.843 [2024-11-21 03:23:48.331018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:01.105 00:15:01.105 real 0m13.878s 00:15:01.105 user 0m17.421s 00:15:01.105 sys 0m2.028s 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.105 ************************************ 00:15:01.105 END TEST raid5f_rebuild_test 00:15:01.105 ************************************ 00:15:01.105 03:23:48 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:01.105 03:23:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:01.105 03:23:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.105 03:23:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.105 ************************************ 00:15:01.105 START TEST raid5f_rebuild_test_sb 00:15:01.105 ************************************ 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:01.105 
03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=94560 00:15:01.105 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94560 00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94560 ']' 00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.106 03:23:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.364 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:01.364 Zero copy mechanism will not be used. 00:15:01.364 [2024-11-21 03:23:48.725084] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:01.364 [2024-11-21 03:23:48.725259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94560 ] 00:15:01.364 [2024-11-21 03:23:48.879965] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:01.364 [2024-11-21 03:23:48.918465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.623 [2024-11-21 03:23:48.948621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.623 [2024-11-21 03:23:48.991811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.623 [2024-11-21 03:23:48.991868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.191 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.191 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:02.191 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.191 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 BaseBdev1_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 [2024-11-21 03:23:49.632199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:02.192 [2024-11-21 03:23:49.632267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.192 [2024-11-21 03:23:49.632322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:02.192 
[2024-11-21 03:23:49.632343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.192 [2024-11-21 03:23:49.634717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.192 [2024-11-21 03:23:49.634759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:02.192 BaseBdev1 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 BaseBdev2_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 [2024-11-21 03:23:49.661379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:02.192 [2024-11-21 03:23:49.661443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.192 [2024-11-21 03:23:49.661463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:02.192 [2024-11-21 03:23:49.661474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.192 [2024-11-21 03:23:49.663864] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.192 [2024-11-21 03:23:49.663908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:02.192 BaseBdev2 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 BaseBdev3_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 [2024-11-21 03:23:49.690457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:02.192 [2024-11-21 03:23:49.690518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.192 [2024-11-21 03:23:49.690542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:02.192 [2024-11-21 03:23:49.690552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.192 [2024-11-21 03:23:49.692982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.192 [2024-11-21 03:23:49.693105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:02.192 BaseBdev3 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 spare_malloc 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 spare_delay 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 [2024-11-21 03:23:49.736769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.192 [2024-11-21 03:23:49.736886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.192 [2024-11-21 03:23:49.736928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:02.192 [2024-11-21 03:23:49.736941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.192 [2024-11-21 03:23:49.739426] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.192 [2024-11-21 03:23:49.739472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.192 spare 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.192 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.192 [2024-11-21 03:23:49.748828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.192 [2024-11-21 03:23:49.750995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.192 [2024-11-21 03:23:49.751071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.192 [2024-11-21 03:23:49.751260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:02.192 [2024-11-21 03:23:49.751316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.192 [2024-11-21 03:23:49.751643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:02.192 [2024-11-21 03:23:49.752118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:02.192 [2024-11-21 03:23:49.752135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:02.192 [2024-11-21 03:23:49.752302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.452 03:23:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.452 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.453 "name": "raid_bdev1", 00:15:02.453 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:02.453 "strip_size_kb": 64, 00:15:02.453 "state": "online", 00:15:02.453 "raid_level": "raid5f", 00:15:02.453 "superblock": true, 
00:15:02.453 "num_base_bdevs": 3, 00:15:02.453 "num_base_bdevs_discovered": 3, 00:15:02.453 "num_base_bdevs_operational": 3, 00:15:02.453 "base_bdevs_list": [ 00:15:02.453 { 00:15:02.453 "name": "BaseBdev1", 00:15:02.453 "uuid": "eacfb758-e350-5aba-952b-29d19396150b", 00:15:02.453 "is_configured": true, 00:15:02.453 "data_offset": 2048, 00:15:02.453 "data_size": 63488 00:15:02.453 }, 00:15:02.453 { 00:15:02.453 "name": "BaseBdev2", 00:15:02.453 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:02.453 "is_configured": true, 00:15:02.453 "data_offset": 2048, 00:15:02.453 "data_size": 63488 00:15:02.453 }, 00:15:02.453 { 00:15:02.453 "name": "BaseBdev3", 00:15:02.453 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:02.453 "is_configured": true, 00:15:02.453 "data_offset": 2048, 00:15:02.453 "data_size": 63488 00:15:02.453 } 00:15:02.453 ] 00:15:02.453 }' 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.453 03:23:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.712 [2024-11-21 03:23:50.206149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.712 03:23:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.712 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.972 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:15:02.972 [2024-11-21 03:23:50.502059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:15:02.972 /dev/nbd0 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.233 1+0 records in 00:15:03.233 1+0 records out 00:15:03.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576447 s, 7.1 MB/s 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:03.233 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:03.493 496+0 records in 00:15:03.493 496+0 records out 00:15:03.493 65011712 bytes (65 MB, 62 MiB) copied, 0.33574 s, 194 MB/s 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.493 03:23:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:03.754 [2024-11-21 03:23:51.146306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.754 [2024-11-21 03:23:51.170439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.754 03:23:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.754 "name": "raid_bdev1", 00:15:03.754 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:03.754 "strip_size_kb": 64, 00:15:03.754 "state": "online", 00:15:03.754 "raid_level": "raid5f", 00:15:03.754 "superblock": true, 00:15:03.754 "num_base_bdevs": 3, 00:15:03.754 "num_base_bdevs_discovered": 2, 00:15:03.754 "num_base_bdevs_operational": 2, 00:15:03.754 "base_bdevs_list": [ 00:15:03.754 { 00:15:03.754 "name": null, 00:15:03.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.754 "is_configured": false, 00:15:03.754 "data_offset": 0, 00:15:03.754 "data_size": 63488 00:15:03.754 }, 00:15:03.754 { 00:15:03.754 "name": "BaseBdev2", 00:15:03.754 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:03.754 "is_configured": true, 00:15:03.754 "data_offset": 2048, 00:15:03.754 "data_size": 63488 00:15:03.754 }, 00:15:03.754 { 00:15:03.754 "name": "BaseBdev3", 00:15:03.754 "uuid": 
"bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:03.754 "is_configured": true, 00:15:03.754 "data_offset": 2048, 00:15:03.754 "data_size": 63488 00:15:03.754 } 00:15:03.754 ] 00:15:03.754 }' 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.754 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.324 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.324 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.324 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.324 [2024-11-21 03:23:51.622608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.324 [2024-11-21 03:23:51.627666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390 00:15:04.324 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.324 03:23:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.324 [2024-11-21 03:23:51.630321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.264 "name": "raid_bdev1", 00:15:05.264 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:05.264 "strip_size_kb": 64, 00:15:05.264 "state": "online", 00:15:05.264 "raid_level": "raid5f", 00:15:05.264 "superblock": true, 00:15:05.264 "num_base_bdevs": 3, 00:15:05.264 "num_base_bdevs_discovered": 3, 00:15:05.264 "num_base_bdevs_operational": 3, 00:15:05.264 "process": { 00:15:05.264 "type": "rebuild", 00:15:05.264 "target": "spare", 00:15:05.264 "progress": { 00:15:05.264 "blocks": 20480, 00:15:05.264 "percent": 16 00:15:05.264 } 00:15:05.264 }, 00:15:05.264 "base_bdevs_list": [ 00:15:05.264 { 00:15:05.264 "name": "spare", 00:15:05.264 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:05.264 "is_configured": true, 00:15:05.264 "data_offset": 2048, 00:15:05.264 "data_size": 63488 00:15:05.264 }, 00:15:05.264 { 00:15:05.264 "name": "BaseBdev2", 00:15:05.264 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:05.264 "is_configured": true, 00:15:05.264 "data_offset": 2048, 00:15:05.264 "data_size": 63488 00:15:05.264 }, 00:15:05.264 { 00:15:05.264 "name": "BaseBdev3", 00:15:05.264 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:05.264 "is_configured": true, 00:15:05.264 "data_offset": 2048, 00:15:05.264 "data_size": 63488 00:15:05.264 } 00:15:05.264 ] 00:15:05.264 }' 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.264 03:23:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.264 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.264 [2024-11-21 03:23:52.788604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.524 [2024-11-21 03:23:52.844043] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.525 [2024-11-21 03:23:52.844131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.525 [2024-11-21 03:23:52.844155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.525 [2024-11-21 03:23:52.844165] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.525 03:23:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.525 "name": "raid_bdev1", 00:15:05.525 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:05.525 "strip_size_kb": 64, 00:15:05.525 "state": "online", 00:15:05.525 "raid_level": "raid5f", 00:15:05.525 "superblock": true, 00:15:05.525 "num_base_bdevs": 3, 00:15:05.525 "num_base_bdevs_discovered": 2, 00:15:05.525 "num_base_bdevs_operational": 2, 00:15:05.525 "base_bdevs_list": [ 00:15:05.525 { 00:15:05.525 "name": null, 00:15:05.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.525 "is_configured": false, 00:15:05.525 "data_offset": 0, 00:15:05.525 "data_size": 63488 00:15:05.525 }, 00:15:05.525 { 00:15:05.525 "name": "BaseBdev2", 00:15:05.525 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:05.525 "is_configured": true, 00:15:05.525 "data_offset": 2048, 00:15:05.525 "data_size": 
63488 00:15:05.525 }, 00:15:05.525 { 00:15:05.525 "name": "BaseBdev3", 00:15:05.525 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:05.525 "is_configured": true, 00:15:05.525 "data_offset": 2048, 00:15:05.525 "data_size": 63488 00:15:05.525 } 00:15:05.525 ] 00:15:05.525 }' 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.525 03:23:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.785 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.045 "name": "raid_bdev1", 00:15:06.045 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:06.045 "strip_size_kb": 64, 00:15:06.045 "state": "online", 00:15:06.045 "raid_level": "raid5f", 00:15:06.045 "superblock": true, 00:15:06.045 "num_base_bdevs": 3, 00:15:06.045 
"num_base_bdevs_discovered": 2, 00:15:06.045 "num_base_bdevs_operational": 2, 00:15:06.045 "base_bdevs_list": [ 00:15:06.045 { 00:15:06.045 "name": null, 00:15:06.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.045 "is_configured": false, 00:15:06.045 "data_offset": 0, 00:15:06.045 "data_size": 63488 00:15:06.045 }, 00:15:06.045 { 00:15:06.045 "name": "BaseBdev2", 00:15:06.045 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:06.045 "is_configured": true, 00:15:06.045 "data_offset": 2048, 00:15:06.045 "data_size": 63488 00:15:06.045 }, 00:15:06.045 { 00:15:06.045 "name": "BaseBdev3", 00:15:06.045 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:06.045 "is_configured": true, 00:15:06.045 "data_offset": 2048, 00:15:06.045 "data_size": 63488 00:15:06.045 } 00:15:06.045 ] 00:15:06.045 }' 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.045 [2024-11-21 03:23:53.475121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.045 [2024-11-21 03:23:53.479946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029460 00:15:06.045 03:23:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.045 03:23:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.045 [2024-11-21 03:23:53.482564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.984 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.985 "name": "raid_bdev1", 00:15:06.985 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:06.985 "strip_size_kb": 64, 00:15:06.985 "state": "online", 00:15:06.985 "raid_level": "raid5f", 00:15:06.985 "superblock": true, 00:15:06.985 "num_base_bdevs": 3, 00:15:06.985 "num_base_bdevs_discovered": 3, 00:15:06.985 "num_base_bdevs_operational": 3, 00:15:06.985 "process": { 00:15:06.985 "type": "rebuild", 00:15:06.985 "target": "spare", 00:15:06.985 "progress": { 00:15:06.985 "blocks": 18432, 00:15:06.985 "percent": 14 00:15:06.985 } 
00:15:06.985 }, 00:15:06.985 "base_bdevs_list": [ 00:15:06.985 { 00:15:06.985 "name": "spare", 00:15:06.985 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:06.985 "is_configured": true, 00:15:06.985 "data_offset": 2048, 00:15:06.985 "data_size": 63488 00:15:06.985 }, 00:15:06.985 { 00:15:06.985 "name": "BaseBdev2", 00:15:06.985 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:06.985 "is_configured": true, 00:15:06.985 "data_offset": 2048, 00:15:06.985 "data_size": 63488 00:15:06.985 }, 00:15:06.985 { 00:15:06.985 "name": "BaseBdev3", 00:15:06.985 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:06.985 "is_configured": true, 00:15:06.985 "data_offset": 2048, 00:15:06.985 "data_size": 63488 00:15:06.985 } 00:15:06.985 ] 00:15:06.985 }' 00:15:06.985 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:07.245 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=469 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.245 03:23:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.245 "name": "raid_bdev1", 00:15:07.245 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:07.245 "strip_size_kb": 64, 00:15:07.245 "state": "online", 00:15:07.245 "raid_level": "raid5f", 00:15:07.245 "superblock": true, 00:15:07.245 "num_base_bdevs": 3, 00:15:07.245 "num_base_bdevs_discovered": 3, 00:15:07.245 "num_base_bdevs_operational": 3, 00:15:07.245 "process": { 00:15:07.245 "type": "rebuild", 00:15:07.245 "target": "spare", 00:15:07.245 "progress": { 00:15:07.245 "blocks": 22528, 00:15:07.245 "percent": 17 00:15:07.245 } 00:15:07.245 }, 00:15:07.245 "base_bdevs_list": [ 00:15:07.245 { 00:15:07.245 "name": "spare", 00:15:07.245 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:07.245 "is_configured": true, 00:15:07.245 "data_offset": 2048, 00:15:07.245 
"data_size": 63488 00:15:07.245 }, 00:15:07.245 { 00:15:07.245 "name": "BaseBdev2", 00:15:07.245 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:07.245 "is_configured": true, 00:15:07.245 "data_offset": 2048, 00:15:07.245 "data_size": 63488 00:15:07.245 }, 00:15:07.245 { 00:15:07.245 "name": "BaseBdev3", 00:15:07.245 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:07.245 "is_configured": true, 00:15:07.245 "data_offset": 2048, 00:15:07.245 "data_size": 63488 00:15:07.245 } 00:15:07.245 ] 00:15:07.245 }' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.245 03:23:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.625 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.625 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.626 "name": "raid_bdev1", 00:15:08.626 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:08.626 "strip_size_kb": 64, 00:15:08.626 "state": "online", 00:15:08.626 "raid_level": "raid5f", 00:15:08.626 "superblock": true, 00:15:08.626 "num_base_bdevs": 3, 00:15:08.626 "num_base_bdevs_discovered": 3, 00:15:08.626 "num_base_bdevs_operational": 3, 00:15:08.626 "process": { 00:15:08.626 "type": "rebuild", 00:15:08.626 "target": "spare", 00:15:08.626 "progress": { 00:15:08.626 "blocks": 45056, 00:15:08.626 "percent": 35 00:15:08.626 } 00:15:08.626 }, 00:15:08.626 "base_bdevs_list": [ 00:15:08.626 { 00:15:08.626 "name": "spare", 00:15:08.626 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:08.626 "is_configured": true, 00:15:08.626 "data_offset": 2048, 00:15:08.626 "data_size": 63488 00:15:08.626 }, 00:15:08.626 { 00:15:08.626 "name": "BaseBdev2", 00:15:08.626 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:08.626 "is_configured": true, 00:15:08.626 "data_offset": 2048, 00:15:08.626 "data_size": 63488 00:15:08.626 }, 00:15:08.626 { 00:15:08.626 "name": "BaseBdev3", 00:15:08.626 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:08.626 "is_configured": true, 00:15:08.626 "data_offset": 2048, 00:15:08.626 "data_size": 63488 00:15:08.626 } 00:15:08.626 ] 00:15:08.626 }' 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.626 
03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.626 03:23:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.633 "name": "raid_bdev1", 00:15:09.633 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:09.633 "strip_size_kb": 64, 00:15:09.633 "state": "online", 00:15:09.633 "raid_level": "raid5f", 00:15:09.633 "superblock": true, 00:15:09.633 "num_base_bdevs": 3, 00:15:09.633 "num_base_bdevs_discovered": 3, 00:15:09.633 
"num_base_bdevs_operational": 3, 00:15:09.633 "process": { 00:15:09.633 "type": "rebuild", 00:15:09.633 "target": "spare", 00:15:09.633 "progress": { 00:15:09.633 "blocks": 69632, 00:15:09.633 "percent": 54 00:15:09.633 } 00:15:09.633 }, 00:15:09.633 "base_bdevs_list": [ 00:15:09.633 { 00:15:09.633 "name": "spare", 00:15:09.633 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:09.633 "is_configured": true, 00:15:09.633 "data_offset": 2048, 00:15:09.633 "data_size": 63488 00:15:09.633 }, 00:15:09.633 { 00:15:09.633 "name": "BaseBdev2", 00:15:09.633 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:09.633 "is_configured": true, 00:15:09.633 "data_offset": 2048, 00:15:09.633 "data_size": 63488 00:15:09.633 }, 00:15:09.633 { 00:15:09.633 "name": "BaseBdev3", 00:15:09.633 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:09.633 "is_configured": true, 00:15:09.633 "data_offset": 2048, 00:15:09.633 "data_size": 63488 00:15:09.633 } 00:15:09.633 ] 00:15:09.633 }' 00:15:09.633 03:23:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.633 03:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.633 03:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.633 03:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.633 03:23:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.579 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.838 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.838 "name": "raid_bdev1", 00:15:10.838 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:10.838 "strip_size_kb": 64, 00:15:10.838 "state": "online", 00:15:10.838 "raid_level": "raid5f", 00:15:10.838 "superblock": true, 00:15:10.838 "num_base_bdevs": 3, 00:15:10.838 "num_base_bdevs_discovered": 3, 00:15:10.838 "num_base_bdevs_operational": 3, 00:15:10.838 "process": { 00:15:10.838 "type": "rebuild", 00:15:10.838 "target": "spare", 00:15:10.838 "progress": { 00:15:10.838 "blocks": 92160, 00:15:10.838 "percent": 72 00:15:10.838 } 00:15:10.838 }, 00:15:10.838 "base_bdevs_list": [ 00:15:10.838 { 00:15:10.838 "name": "spare", 00:15:10.838 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:10.838 "is_configured": true, 00:15:10.838 "data_offset": 2048, 00:15:10.838 "data_size": 63488 00:15:10.838 }, 00:15:10.838 { 00:15:10.838 "name": "BaseBdev2", 00:15:10.838 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:10.838 "is_configured": true, 00:15:10.838 "data_offset": 2048, 00:15:10.838 "data_size": 63488 00:15:10.838 }, 00:15:10.838 { 00:15:10.838 "name": "BaseBdev3", 
00:15:10.838 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:10.838 "is_configured": true, 00:15:10.838 "data_offset": 2048, 00:15:10.838 "data_size": 63488 00:15:10.838 } 00:15:10.838 ] 00:15:10.838 }' 00:15:10.838 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.838 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.838 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.838 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.838 03:23:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.826 "name": "raid_bdev1", 00:15:11.826 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:11.826 "strip_size_kb": 64, 00:15:11.826 "state": "online", 00:15:11.826 "raid_level": "raid5f", 00:15:11.826 "superblock": true, 00:15:11.826 "num_base_bdevs": 3, 00:15:11.826 "num_base_bdevs_discovered": 3, 00:15:11.826 "num_base_bdevs_operational": 3, 00:15:11.826 "process": { 00:15:11.826 "type": "rebuild", 00:15:11.826 "target": "spare", 00:15:11.826 "progress": { 00:15:11.826 "blocks": 116736, 00:15:11.826 "percent": 91 00:15:11.826 } 00:15:11.826 }, 00:15:11.826 "base_bdevs_list": [ 00:15:11.826 { 00:15:11.826 "name": "spare", 00:15:11.826 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:11.826 "is_configured": true, 00:15:11.826 "data_offset": 2048, 00:15:11.826 "data_size": 63488 00:15:11.826 }, 00:15:11.826 { 00:15:11.826 "name": "BaseBdev2", 00:15:11.826 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:11.826 "is_configured": true, 00:15:11.826 "data_offset": 2048, 00:15:11.826 "data_size": 63488 00:15:11.826 }, 00:15:11.826 { 00:15:11.826 "name": "BaseBdev3", 00:15:11.826 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:11.826 "is_configured": true, 00:15:11.826 "data_offset": 2048, 00:15:11.826 "data_size": 63488 00:15:11.826 } 00:15:11.826 ] 00:15:11.826 }' 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.826 03:23:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.396 [2024-11-21 
03:23:59.739241] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:12.396 [2024-11-21 03:23:59.739424] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:12.396 [2024-11-21 03:23:59.739570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.964 "name": "raid_bdev1", 00:15:12.964 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:12.964 "strip_size_kb": 64, 00:15:12.964 "state": "online", 00:15:12.964 "raid_level": "raid5f", 00:15:12.964 "superblock": true, 00:15:12.964 "num_base_bdevs": 3, 00:15:12.964 
"num_base_bdevs_discovered": 3, 00:15:12.964 "num_base_bdevs_operational": 3, 00:15:12.964 "base_bdevs_list": [ 00:15:12.964 { 00:15:12.964 "name": "spare", 00:15:12.964 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:12.964 "is_configured": true, 00:15:12.964 "data_offset": 2048, 00:15:12.964 "data_size": 63488 00:15:12.964 }, 00:15:12.964 { 00:15:12.964 "name": "BaseBdev2", 00:15:12.964 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:12.964 "is_configured": true, 00:15:12.964 "data_offset": 2048, 00:15:12.964 "data_size": 63488 00:15:12.964 }, 00:15:12.964 { 00:15:12.964 "name": "BaseBdev3", 00:15:12.964 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:12.964 "is_configured": true, 00:15:12.964 "data_offset": 2048, 00:15:12.964 "data_size": 63488 00:15:12.964 } 00:15:12.964 ] 00:15:12.964 }' 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.964 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.222 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.222 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.222 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.222 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.222 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.222 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.223 "name": "raid_bdev1", 00:15:13.223 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:13.223 "strip_size_kb": 64, 00:15:13.223 "state": "online", 00:15:13.223 "raid_level": "raid5f", 00:15:13.223 "superblock": true, 00:15:13.223 "num_base_bdevs": 3, 00:15:13.223 "num_base_bdevs_discovered": 3, 00:15:13.223 "num_base_bdevs_operational": 3, 00:15:13.223 "base_bdevs_list": [ 00:15:13.223 { 00:15:13.223 "name": "spare", 00:15:13.223 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 2048, 00:15:13.223 "data_size": 63488 00:15:13.223 }, 00:15:13.223 { 00:15:13.223 "name": "BaseBdev2", 00:15:13.223 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 2048, 00:15:13.223 "data_size": 63488 00:15:13.223 }, 00:15:13.223 { 00:15:13.223 "name": "BaseBdev3", 00:15:13.223 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 2048, 00:15:13.223 "data_size": 63488 00:15:13.223 } 00:15:13.223 ] 00:15:13.223 }' 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.223 "name": "raid_bdev1", 
00:15:13.223 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:13.223 "strip_size_kb": 64, 00:15:13.223 "state": "online", 00:15:13.223 "raid_level": "raid5f", 00:15:13.223 "superblock": true, 00:15:13.223 "num_base_bdevs": 3, 00:15:13.223 "num_base_bdevs_discovered": 3, 00:15:13.223 "num_base_bdevs_operational": 3, 00:15:13.223 "base_bdevs_list": [ 00:15:13.223 { 00:15:13.223 "name": "spare", 00:15:13.223 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 2048, 00:15:13.223 "data_size": 63488 00:15:13.223 }, 00:15:13.223 { 00:15:13.223 "name": "BaseBdev2", 00:15:13.223 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 2048, 00:15:13.223 "data_size": 63488 00:15:13.223 }, 00:15:13.223 { 00:15:13.223 "name": "BaseBdev3", 00:15:13.223 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 2048, 00:15:13.223 "data_size": 63488 00:15:13.223 } 00:15:13.223 ] 00:15:13.223 }' 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.223 03:24:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.791 [2024-11-21 03:24:01.056927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.791 [2024-11-21 03:24:01.057052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.791 [2024-11-21 03:24:01.057149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.791 [2024-11-21 03:24:01.057237] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.791 [2024-11-21 03:24:01.057253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:13.791 /dev/nbd0 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.791 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.050 1+0 records in 00:15:14.050 1+0 records out 00:15:14.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297544 s, 13.8 MB/s 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.050 03:24:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:14.050 /dev/nbd1 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.050 1+0 records in 00:15:14.050 1+0 records out 00:15:14.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033153 s, 12.4 MB/s 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.050 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:14.308 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:14.308 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.308 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.308 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.308 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:14.308 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.309 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.568 03:24:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.568 [2024-11-21 03:24:02.116425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.568 [2024-11-21 03:24:02.116487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.568 [2024-11-21 03:24:02.116507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:14.568 [2024-11-21 03:24:02.116518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.568 [2024-11-21 03:24:02.118838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.568 [2024-11-21 03:24:02.118905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.568 [2024-11-21 03:24:02.118991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.568 [2024-11-21 03:24:02.119062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.568 [2024-11-21 03:24:02.119195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.568 [2024-11-21 03:24:02.119295] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.568 spare 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.568 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.828 [2024-11-21 03:24:02.219370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:14.828 [2024-11-21 03:24:02.219400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.828 [2024-11-21 03:24:02.219657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:15:14.828 [2024-11-21 03:24:02.220109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:14.828 [2024-11-21 03:24:02.220124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:14.828 [2024-11-21 03:24:02.220291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.828 "name": "raid_bdev1", 00:15:14.828 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:14.828 "strip_size_kb": 64, 00:15:14.828 "state": "online", 00:15:14.828 "raid_level": "raid5f", 00:15:14.828 "superblock": true, 00:15:14.828 "num_base_bdevs": 3, 00:15:14.828 "num_base_bdevs_discovered": 3, 00:15:14.828 "num_base_bdevs_operational": 3, 00:15:14.828 "base_bdevs_list": [ 00:15:14.828 { 00:15:14.828 "name": "spare", 00:15:14.828 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev2", 00:15:14.828 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 
2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev3", 00:15:14.828 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 } 00:15:14.828 ] 00:15:14.828 }' 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.828 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.088 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.348 "name": "raid_bdev1", 00:15:15.348 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:15.348 "strip_size_kb": 64, 00:15:15.348 "state": "online", 00:15:15.348 "raid_level": "raid5f", 00:15:15.348 "superblock": true, 00:15:15.348 
"num_base_bdevs": 3, 00:15:15.348 "num_base_bdevs_discovered": 3, 00:15:15.348 "num_base_bdevs_operational": 3, 00:15:15.348 "base_bdevs_list": [ 00:15:15.348 { 00:15:15.348 "name": "spare", 00:15:15.348 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:15.348 "is_configured": true, 00:15:15.348 "data_offset": 2048, 00:15:15.348 "data_size": 63488 00:15:15.348 }, 00:15:15.348 { 00:15:15.348 "name": "BaseBdev2", 00:15:15.348 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:15.348 "is_configured": true, 00:15:15.348 "data_offset": 2048, 00:15:15.348 "data_size": 63488 00:15:15.348 }, 00:15:15.348 { 00:15:15.348 "name": "BaseBdev3", 00:15:15.348 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:15.348 "is_configured": true, 00:15:15.348 "data_offset": 2048, 00:15:15.348 "data_size": 63488 00:15:15.348 } 00:15:15.348 ] 00:15:15.348 }' 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.348 03:24:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.348 [2024-11-21 03:24:02.853714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.348 03:24:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.348 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.608 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.608 "name": "raid_bdev1", 00:15:15.608 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:15.608 "strip_size_kb": 64, 00:15:15.608 "state": "online", 00:15:15.608 "raid_level": "raid5f", 00:15:15.608 "superblock": true, 00:15:15.608 "num_base_bdevs": 3, 00:15:15.608 "num_base_bdevs_discovered": 2, 00:15:15.608 "num_base_bdevs_operational": 2, 00:15:15.608 "base_bdevs_list": [ 00:15:15.608 { 00:15:15.608 "name": null, 00:15:15.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.608 "is_configured": false, 00:15:15.608 "data_offset": 0, 00:15:15.608 "data_size": 63488 00:15:15.608 }, 00:15:15.608 { 00:15:15.608 "name": "BaseBdev2", 00:15:15.608 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:15.608 "is_configured": true, 00:15:15.608 "data_offset": 2048, 00:15:15.608 "data_size": 63488 00:15:15.608 }, 00:15:15.608 { 00:15:15.608 "name": "BaseBdev3", 00:15:15.608 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:15.608 "is_configured": true, 00:15:15.608 "data_offset": 2048, 00:15:15.608 "data_size": 63488 00:15:15.608 } 00:15:15.608 ] 00:15:15.608 }' 00:15:15.608 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.608 03:24:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.868 03:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.868 03:24:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.868 03:24:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.868 [2024-11-21 03:24:03.309872] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.868 [2024-11-21 03:24:03.310105] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.868 [2024-11-21 03:24:03.310168] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:15.868 [2024-11-21 03:24:03.310224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.868 [2024-11-21 03:24:03.315704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047be0 00:15:15.868 03:24:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.868 03:24:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:15.868 [2024-11-21 03:24:03.317819] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:16.806 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.066 "name": "raid_bdev1", 00:15:17.066 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:17.066 "strip_size_kb": 64, 00:15:17.066 "state": "online", 00:15:17.066 "raid_level": "raid5f", 00:15:17.066 "superblock": true, 00:15:17.066 "num_base_bdevs": 3, 00:15:17.066 "num_base_bdevs_discovered": 3, 00:15:17.066 "num_base_bdevs_operational": 3, 00:15:17.066 "process": { 00:15:17.066 "type": "rebuild", 00:15:17.066 "target": "spare", 00:15:17.066 "progress": { 00:15:17.066 "blocks": 20480, 00:15:17.066 "percent": 16 00:15:17.066 } 00:15:17.066 }, 00:15:17.066 "base_bdevs_list": [ 00:15:17.066 { 00:15:17.066 "name": "spare", 00:15:17.066 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:17.066 "is_configured": true, 00:15:17.066 "data_offset": 2048, 00:15:17.066 "data_size": 63488 00:15:17.066 }, 00:15:17.066 { 00:15:17.066 "name": "BaseBdev2", 00:15:17.066 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:17.066 "is_configured": true, 00:15:17.066 "data_offset": 2048, 00:15:17.066 "data_size": 63488 00:15:17.066 }, 00:15:17.066 { 00:15:17.066 "name": "BaseBdev3", 00:15:17.066 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:17.066 "is_configured": true, 00:15:17.066 "data_offset": 2048, 00:15:17.066 "data_size": 63488 00:15:17.066 } 00:15:17.066 ] 00:15:17.066 }' 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.066 [2024-11-21 03:24:04.459991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.066 [2024-11-21 03:24:04.526506] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.066 [2024-11-21 03:24:04.526611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.066 [2024-11-21 03:24:04.526646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.066 [2024-11-21 03:24:04.526673] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.066 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.067 03:24:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.067 "name": "raid_bdev1", 00:15:17.067 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:17.067 "strip_size_kb": 64, 00:15:17.067 "state": "online", 00:15:17.067 "raid_level": "raid5f", 00:15:17.067 "superblock": true, 00:15:17.067 "num_base_bdevs": 3, 00:15:17.067 "num_base_bdevs_discovered": 2, 00:15:17.067 "num_base_bdevs_operational": 2, 00:15:17.067 "base_bdevs_list": [ 00:15:17.067 { 00:15:17.067 "name": null, 00:15:17.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.067 "is_configured": false, 00:15:17.067 "data_offset": 0, 00:15:17.067 "data_size": 63488 00:15:17.067 }, 00:15:17.067 { 00:15:17.067 "name": "BaseBdev2", 00:15:17.067 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:17.067 "is_configured": true, 00:15:17.067 "data_offset": 2048, 00:15:17.067 "data_size": 63488 00:15:17.067 }, 00:15:17.067 { 00:15:17.067 "name": "BaseBdev3", 00:15:17.067 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:17.067 "is_configured": true, 00:15:17.067 "data_offset": 2048, 00:15:17.067 "data_size": 63488 00:15:17.067 } 00:15:17.067 ] 00:15:17.067 }' 00:15:17.067 03:24:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.067 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.636 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 [2024-11-21 03:24:04.980172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:17.636 [2024-11-21 03:24:04.980274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.636 [2024-11-21 03:24:04.980310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:17.636 [2024-11-21 03:24:04.980339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.636 [2024-11-21 03:24:04.980802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.636 [2024-11-21 03:24:04.980864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:17.636 [2024-11-21 03:24:04.980948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:17.636 [2024-11-21 03:24:04.980963] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.636 [2024-11-21 03:24:04.980972] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:17.636 [2024-11-21 03:24:04.980995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.636 [2024-11-21 03:24:04.985025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047cb0 00:15:17.636 spare 00:15:17.636 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 03:24:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:17.636 [2024-11-21 03:24:04.987146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.575 03:24:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.575 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.575 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.575 "name": "raid_bdev1", 00:15:18.575 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:18.575 "strip_size_kb": 64, 00:15:18.575 "state": 
"online", 00:15:18.575 "raid_level": "raid5f", 00:15:18.575 "superblock": true, 00:15:18.575 "num_base_bdevs": 3, 00:15:18.575 "num_base_bdevs_discovered": 3, 00:15:18.575 "num_base_bdevs_operational": 3, 00:15:18.575 "process": { 00:15:18.575 "type": "rebuild", 00:15:18.575 "target": "spare", 00:15:18.575 "progress": { 00:15:18.575 "blocks": 20480, 00:15:18.575 "percent": 16 00:15:18.575 } 00:15:18.575 }, 00:15:18.575 "base_bdevs_list": [ 00:15:18.575 { 00:15:18.575 "name": "spare", 00:15:18.575 "uuid": "47528ca9-9a15-5970-8e6c-e4d4979040ef", 00:15:18.575 "is_configured": true, 00:15:18.575 "data_offset": 2048, 00:15:18.575 "data_size": 63488 00:15:18.575 }, 00:15:18.575 { 00:15:18.575 "name": "BaseBdev2", 00:15:18.575 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:18.575 "is_configured": true, 00:15:18.575 "data_offset": 2048, 00:15:18.575 "data_size": 63488 00:15:18.575 }, 00:15:18.575 { 00:15:18.575 "name": "BaseBdev3", 00:15:18.575 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:18.575 "is_configured": true, 00:15:18.575 "data_offset": 2048, 00:15:18.575 "data_size": 63488 00:15:18.575 } 00:15:18.575 ] 00:15:18.575 }' 00:15:18.575 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.575 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.575 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.835 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.836 [2024-11-21 03:24:06.153347] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.836 [2024-11-21 03:24:06.195801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:18.836 [2024-11-21 03:24:06.195855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.836 [2024-11-21 03:24:06.195875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.836 [2024-11-21 03:24:06.195882] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.836 "name": "raid_bdev1", 00:15:18.836 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:18.836 "strip_size_kb": 64, 00:15:18.836 "state": "online", 00:15:18.836 "raid_level": "raid5f", 00:15:18.836 "superblock": true, 00:15:18.836 "num_base_bdevs": 3, 00:15:18.836 "num_base_bdevs_discovered": 2, 00:15:18.836 "num_base_bdevs_operational": 2, 00:15:18.836 "base_bdevs_list": [ 00:15:18.836 { 00:15:18.836 "name": null, 00:15:18.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.836 "is_configured": false, 00:15:18.836 "data_offset": 0, 00:15:18.836 "data_size": 63488 00:15:18.836 }, 00:15:18.836 { 00:15:18.836 "name": "BaseBdev2", 00:15:18.836 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:18.836 "is_configured": true, 00:15:18.836 "data_offset": 2048, 00:15:18.836 "data_size": 63488 00:15:18.836 }, 00:15:18.836 { 00:15:18.836 "name": "BaseBdev3", 00:15:18.836 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:18.836 "is_configured": true, 00:15:18.836 "data_offset": 2048, 00:15:18.836 "data_size": 63488 00:15:18.836 } 00:15:18.836 ] 00:15:18.836 }' 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.836 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.095 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.352 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.352 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.352 "name": "raid_bdev1", 00:15:19.352 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:19.352 "strip_size_kb": 64, 00:15:19.352 "state": "online", 00:15:19.352 "raid_level": "raid5f", 00:15:19.352 "superblock": true, 00:15:19.352 "num_base_bdevs": 3, 00:15:19.352 "num_base_bdevs_discovered": 2, 00:15:19.352 "num_base_bdevs_operational": 2, 00:15:19.352 "base_bdevs_list": [ 00:15:19.352 { 00:15:19.352 "name": null, 00:15:19.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.352 "is_configured": false, 00:15:19.352 "data_offset": 0, 00:15:19.352 "data_size": 63488 00:15:19.352 }, 00:15:19.352 { 00:15:19.352 "name": "BaseBdev2", 00:15:19.352 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:19.352 "is_configured": true, 00:15:19.352 "data_offset": 2048, 00:15:19.352 "data_size": 63488 00:15:19.352 }, 00:15:19.352 { 00:15:19.352 "name": "BaseBdev3", 00:15:19.352 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:19.352 "is_configured": true, 
00:15:19.352 "data_offset": 2048, 00:15:19.352 "data_size": 63488 00:15:19.352 } 00:15:19.352 ] 00:15:19.352 }' 00:15:19.352 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.352 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.353 [2024-11-21 03:24:06.813432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.353 [2024-11-21 03:24:06.813481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.353 [2024-11-21 03:24:06.813501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:19.353 [2024-11-21 03:24:06.813509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.353 [2024-11-21 03:24:06.813914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.353 [2024-11-21 
03:24:06.813930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.353 [2024-11-21 03:24:06.813995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:19.353 [2024-11-21 03:24:06.814008] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.353 [2024-11-21 03:24:06.814034] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.353 [2024-11-21 03:24:06.814044] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:19.353 BaseBdev1 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.353 03:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.288 03:24:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.288 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.547 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.547 "name": "raid_bdev1", 00:15:20.547 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:20.547 "strip_size_kb": 64, 00:15:20.547 "state": "online", 00:15:20.547 "raid_level": "raid5f", 00:15:20.547 "superblock": true, 00:15:20.547 "num_base_bdevs": 3, 00:15:20.547 "num_base_bdevs_discovered": 2, 00:15:20.547 "num_base_bdevs_operational": 2, 00:15:20.547 "base_bdevs_list": [ 00:15:20.547 { 00:15:20.547 "name": null, 00:15:20.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.547 "is_configured": false, 00:15:20.547 "data_offset": 0, 00:15:20.547 "data_size": 63488 00:15:20.547 }, 00:15:20.547 { 00:15:20.547 "name": "BaseBdev2", 00:15:20.547 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:20.547 "is_configured": true, 00:15:20.547 "data_offset": 2048, 00:15:20.547 "data_size": 63488 00:15:20.547 }, 00:15:20.547 { 00:15:20.547 "name": "BaseBdev3", 00:15:20.547 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:20.547 "is_configured": true, 00:15:20.547 "data_offset": 2048, 00:15:20.547 "data_size": 63488 00:15:20.547 } 00:15:20.547 ] 00:15:20.547 }' 00:15:20.547 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.547 03:24:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.806 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.806 "name": "raid_bdev1", 00:15:20.806 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:20.806 "strip_size_kb": 64, 00:15:20.806 "state": "online", 00:15:20.806 "raid_level": "raid5f", 00:15:20.806 "superblock": true, 00:15:20.806 "num_base_bdevs": 3, 00:15:20.807 "num_base_bdevs_discovered": 2, 00:15:20.807 "num_base_bdevs_operational": 2, 00:15:20.807 "base_bdevs_list": [ 00:15:20.807 { 00:15:20.807 "name": null, 00:15:20.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.807 "is_configured": false, 00:15:20.807 "data_offset": 0, 00:15:20.807 "data_size": 63488 00:15:20.807 }, 00:15:20.807 { 00:15:20.807 "name": "BaseBdev2", 00:15:20.807 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 
00:15:20.807 "is_configured": true, 00:15:20.807 "data_offset": 2048, 00:15:20.807 "data_size": 63488 00:15:20.807 }, 00:15:20.807 { 00:15:20.807 "name": "BaseBdev3", 00:15:20.807 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:20.807 "is_configured": true, 00:15:20.807 "data_offset": 2048, 00:15:20.807 "data_size": 63488 00:15:20.807 } 00:15:20.807 ] 00:15:20.807 }' 00:15:20.807 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.066 03:24:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.066 [2024-11-21 03:24:08.437887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.066 [2024-11-21 03:24:08.438083] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:21.066 [2024-11-21 03:24:08.438141] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:21.066 request: 00:15:21.066 { 00:15:21.066 "base_bdev": "BaseBdev1", 00:15:21.066 "raid_bdev": "raid_bdev1", 00:15:21.066 "method": "bdev_raid_add_base_bdev", 00:15:21.066 "req_id": 1 00:15:21.066 } 00:15:21.066 Got JSON-RPC error response 00:15:21.066 response: 00:15:21.066 { 00:15:21.066 "code": -22, 00:15:21.066 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:21.066 } 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.066 03:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.005 "name": "raid_bdev1", 00:15:22.005 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:22.005 "strip_size_kb": 64, 00:15:22.005 "state": "online", 00:15:22.005 "raid_level": "raid5f", 00:15:22.005 "superblock": true, 00:15:22.005 "num_base_bdevs": 3, 00:15:22.005 "num_base_bdevs_discovered": 2, 00:15:22.005 "num_base_bdevs_operational": 2, 00:15:22.005 "base_bdevs_list": [ 00:15:22.005 { 00:15:22.005 "name": null, 00:15:22.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.005 "is_configured": false, 00:15:22.005 "data_offset": 0, 00:15:22.005 "data_size": 63488 00:15:22.005 }, 00:15:22.005 { 00:15:22.005 
"name": "BaseBdev2", 00:15:22.005 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:22.005 "is_configured": true, 00:15:22.005 "data_offset": 2048, 00:15:22.005 "data_size": 63488 00:15:22.005 }, 00:15:22.005 { 00:15:22.005 "name": "BaseBdev3", 00:15:22.005 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:22.005 "is_configured": true, 00:15:22.005 "data_offset": 2048, 00:15:22.005 "data_size": 63488 00:15:22.005 } 00:15:22.005 ] 00:15:22.005 }' 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.005 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.574 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.574 "name": "raid_bdev1", 00:15:22.574 "uuid": "6be5d739-36d3-4916-8ddb-5c3d75f2cdf7", 00:15:22.574 
"strip_size_kb": 64, 00:15:22.574 "state": "online", 00:15:22.574 "raid_level": "raid5f", 00:15:22.574 "superblock": true, 00:15:22.574 "num_base_bdevs": 3, 00:15:22.574 "num_base_bdevs_discovered": 2, 00:15:22.574 "num_base_bdevs_operational": 2, 00:15:22.574 "base_bdevs_list": [ 00:15:22.574 { 00:15:22.574 "name": null, 00:15:22.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.574 "is_configured": false, 00:15:22.574 "data_offset": 0, 00:15:22.574 "data_size": 63488 00:15:22.574 }, 00:15:22.574 { 00:15:22.574 "name": "BaseBdev2", 00:15:22.574 "uuid": "74115d42-9c4f-5682-b66c-0f3b09595570", 00:15:22.574 "is_configured": true, 00:15:22.574 "data_offset": 2048, 00:15:22.574 "data_size": 63488 00:15:22.574 }, 00:15:22.574 { 00:15:22.574 "name": "BaseBdev3", 00:15:22.574 "uuid": "bb1a4629-004e-590f-be20-dbff1e8af980", 00:15:22.574 "is_configured": true, 00:15:22.574 "data_offset": 2048, 00:15:22.574 "data_size": 63488 00:15:22.575 } 00:15:22.575 ] 00:15:22.575 }' 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94560 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94560 ']' 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 94560 00:15:22.575 03:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:22.575 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.575 03:24:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94560 00:15:22.575 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.575 killing process with pid 94560 00:15:22.575 Received shutdown signal, test time was about 60.000000 seconds 00:15:22.575 00:15:22.575 Latency(us) 00:15:22.575 [2024-11-21T03:24:10.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.575 [2024-11-21T03:24:10.141Z] =================================================================================================================== 00:15:22.575 [2024-11-21T03:24:10.141Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:22.575 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.575 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94560' 00:15:22.575 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 94560 00:15:22.575 [2024-11-21 03:24:10.042685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.575 [2024-11-21 03:24:10.042796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.575 [2024-11-21 03:24:10.042855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.575 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 94560 00:15:22.575 [2024-11-21 03:24:10.042877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:22.575 [2024-11-21 03:24:10.083970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.835 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:22.835 00:15:22.835 real 0m21.678s 00:15:22.835 user 0m28.311s 
00:15:22.835 sys 0m2.715s 00:15:22.835 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.835 ************************************ 00:15:22.835 END TEST raid5f_rebuild_test_sb 00:15:22.835 ************************************ 00:15:22.835 03:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.835 03:24:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:22.835 03:24:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:22.835 03:24:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.835 03:24:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.835 03:24:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.835 ************************************ 00:15:22.835 START TEST raid5f_state_function_test 00:15:22.835 ************************************ 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.835 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=95296 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95296' 00:15:22.836 Process raid pid: 95296 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 95296 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 95296 ']' 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.836 03:24:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.096 [2024-11-21 03:24:10.462839] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:15:23.096 [2024-11-21 03:24:10.463084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.096 [2024-11-21 03:24:10.600452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:23.096 [2024-11-21 03:24:10.639890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.356 [2024-11-21 03:24:10.668343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.357 [2024-11-21 03:24:10.711648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.357 [2024-11-21 03:24:10.711676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.926 [2024-11-21 03:24:11.274191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.926 [2024-11-21 03:24:11.274242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.926 [2024-11-21 03:24:11.274254] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.926 [2024-11-21 03:24:11.274262] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.926 [2024-11-21 03:24:11.274272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.926 [2024-11-21 03:24:11.274279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.926 [2024-11-21 03:24:11.274286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.926 [2024-11-21 03:24:11.274293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.926 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.927 "name": "Existed_Raid", 00:15:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.927 "strip_size_kb": 64, 00:15:23.927 "state": "configuring", 00:15:23.927 "raid_level": "raid5f", 00:15:23.927 "superblock": false, 00:15:23.927 "num_base_bdevs": 4, 00:15:23.927 "num_base_bdevs_discovered": 0, 00:15:23.927 "num_base_bdevs_operational": 4, 00:15:23.927 "base_bdevs_list": [ 00:15:23.927 { 00:15:23.927 "name": "BaseBdev1", 00:15:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.927 "is_configured": false, 00:15:23.927 "data_offset": 0, 00:15:23.927 "data_size": 0 00:15:23.927 }, 00:15:23.927 { 00:15:23.927 "name": "BaseBdev2", 00:15:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.927 "is_configured": false, 00:15:23.927 "data_offset": 0, 00:15:23.927 "data_size": 0 00:15:23.927 }, 00:15:23.927 { 00:15:23.927 "name": "BaseBdev3", 00:15:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.927 "is_configured": false, 00:15:23.927 "data_offset": 0, 00:15:23.927 "data_size": 0 00:15:23.927 }, 00:15:23.927 { 00:15:23.927 "name": "BaseBdev4", 00:15:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.927 "is_configured": false, 00:15:23.927 "data_offset": 0, 00:15:23.927 "data_size": 0 00:15:23.927 } 00:15:23.927 ] 00:15:23.927 }' 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:23.927 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 [2024-11-21 03:24:11.762228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.497 [2024-11-21 03:24:11.762303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 [2024-11-21 03:24:11.774272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.497 [2024-11-21 03:24:11.774351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.497 [2024-11-21 03:24:11.774393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.497 [2024-11-21 03:24:11.774416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.497 [2024-11-21 03:24:11.774440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.497 [2024-11-21 03:24:11.774462] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.497 [2024-11-21 03:24:11.774495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.497 [2024-11-21 03:24:11.774518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 [2024-11-21 03:24:11.795089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.497 BaseBdev1 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.497 03:24:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.497 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 [ 00:15:24.497 { 00:15:24.497 "name": "BaseBdev1", 00:15:24.497 "aliases": [ 00:15:24.497 "c6e37e32-ea3c-43eb-9000-0b02730fddf3" 00:15:24.497 ], 00:15:24.497 "product_name": "Malloc disk", 00:15:24.497 "block_size": 512, 00:15:24.497 "num_blocks": 65536, 00:15:24.497 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:24.497 "assigned_rate_limits": { 00:15:24.497 "rw_ios_per_sec": 0, 00:15:24.497 "rw_mbytes_per_sec": 0, 00:15:24.497 "r_mbytes_per_sec": 0, 00:15:24.497 "w_mbytes_per_sec": 0 00:15:24.497 }, 00:15:24.497 "claimed": true, 00:15:24.497 "claim_type": "exclusive_write", 00:15:24.497 "zoned": false, 00:15:24.497 "supported_io_types": { 00:15:24.497 "read": true, 00:15:24.497 "write": true, 00:15:24.497 "unmap": true, 00:15:24.497 "flush": true, 00:15:24.497 "reset": true, 00:15:24.497 "nvme_admin": false, 00:15:24.497 "nvme_io": false, 00:15:24.497 "nvme_io_md": false, 00:15:24.497 "write_zeroes": true, 00:15:24.497 "zcopy": true, 00:15:24.497 "get_zone_info": false, 00:15:24.497 "zone_management": false, 00:15:24.497 "zone_append": false, 00:15:24.497 "compare": false, 00:15:24.497 "compare_and_write": false, 00:15:24.497 "abort": true, 00:15:24.497 "seek_hole": false, 00:15:24.497 "seek_data": false, 00:15:24.497 "copy": true, 00:15:24.497 "nvme_iov_md": false 00:15:24.497 }, 00:15:24.497 "memory_domains": [ 00:15:24.497 { 00:15:24.497 "dma_device_id": "system", 00:15:24.497 "dma_device_type": 1 
00:15:24.497 }, 00:15:24.497 { 00:15:24.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.497 "dma_device_type": 2 00:15:24.497 } 00:15:24.497 ], 00:15:24.497 "driver_specific": {} 00:15:24.498 } 00:15:24.498 ] 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.498 
03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.498 "name": "Existed_Raid", 00:15:24.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.498 "strip_size_kb": 64, 00:15:24.498 "state": "configuring", 00:15:24.498 "raid_level": "raid5f", 00:15:24.498 "superblock": false, 00:15:24.498 "num_base_bdevs": 4, 00:15:24.498 "num_base_bdevs_discovered": 1, 00:15:24.498 "num_base_bdevs_operational": 4, 00:15:24.498 "base_bdevs_list": [ 00:15:24.498 { 00:15:24.498 "name": "BaseBdev1", 00:15:24.498 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:24.498 "is_configured": true, 00:15:24.498 "data_offset": 0, 00:15:24.498 "data_size": 65536 00:15:24.498 }, 00:15:24.498 { 00:15:24.498 "name": "BaseBdev2", 00:15:24.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.498 "is_configured": false, 00:15:24.498 "data_offset": 0, 00:15:24.498 "data_size": 0 00:15:24.498 }, 00:15:24.498 { 00:15:24.498 "name": "BaseBdev3", 00:15:24.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.498 "is_configured": false, 00:15:24.498 "data_offset": 0, 00:15:24.498 "data_size": 0 00:15:24.498 }, 00:15:24.498 { 00:15:24.498 "name": "BaseBdev4", 00:15:24.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.498 "is_configured": false, 00:15:24.498 "data_offset": 0, 00:15:24.498 "data_size": 0 00:15:24.498 } 00:15:24.498 ] 00:15:24.498 }' 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.498 03:24:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.758 03:24:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 [2024-11-21 03:24:12.303245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.758 [2024-11-21 03:24:12.303290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 [2024-11-21 03:24:12.315293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.758 [2024-11-21 03:24:12.317148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.758 [2024-11-21 03:24:12.317221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.758 [2024-11-21 03:24:12.317250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.758 [2024-11-21 03:24:12.317270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.758 [2024-11-21 03:24:12.317289] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.758 [2024-11-21 03:24:12.317308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.758 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.019 "name": "Existed_Raid", 00:15:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.019 "strip_size_kb": 64, 00:15:25.019 "state": "configuring", 00:15:25.019 "raid_level": "raid5f", 00:15:25.019 "superblock": false, 00:15:25.019 "num_base_bdevs": 4, 00:15:25.019 "num_base_bdevs_discovered": 1, 00:15:25.019 "num_base_bdevs_operational": 4, 00:15:25.019 "base_bdevs_list": [ 00:15:25.019 { 00:15:25.019 "name": "BaseBdev1", 00:15:25.019 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:25.019 "is_configured": true, 00:15:25.019 "data_offset": 0, 00:15:25.019 "data_size": 65536 00:15:25.019 }, 00:15:25.019 { 00:15:25.019 "name": "BaseBdev2", 00:15:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.019 "is_configured": false, 00:15:25.019 "data_offset": 0, 00:15:25.019 "data_size": 0 00:15:25.019 }, 00:15:25.019 { 00:15:25.019 "name": "BaseBdev3", 00:15:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.019 "is_configured": false, 00:15:25.019 "data_offset": 0, 00:15:25.019 "data_size": 0 00:15:25.019 }, 00:15:25.019 { 00:15:25.019 "name": "BaseBdev4", 00:15:25.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.019 "is_configured": false, 00:15:25.019 "data_offset": 0, 00:15:25.019 "data_size": 0 00:15:25.019 } 00:15:25.019 ] 00:15:25.019 }' 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.019 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.279 
[2024-11-21 03:24:12.802519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.279 BaseBdev2 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.279 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.279 [ 00:15:25.279 { 00:15:25.279 "name": "BaseBdev2", 00:15:25.279 "aliases": [ 00:15:25.279 "70faa97b-5350-4fdb-8123-978fe86287b1" 00:15:25.279 ], 00:15:25.279 "product_name": "Malloc disk", 00:15:25.279 "block_size": 512, 00:15:25.279 "num_blocks": 
65536, 00:15:25.279 "uuid": "70faa97b-5350-4fdb-8123-978fe86287b1", 00:15:25.279 "assigned_rate_limits": { 00:15:25.279 "rw_ios_per_sec": 0, 00:15:25.279 "rw_mbytes_per_sec": 0, 00:15:25.279 "r_mbytes_per_sec": 0, 00:15:25.279 "w_mbytes_per_sec": 0 00:15:25.279 }, 00:15:25.279 "claimed": true, 00:15:25.279 "claim_type": "exclusive_write", 00:15:25.279 "zoned": false, 00:15:25.279 "supported_io_types": { 00:15:25.279 "read": true, 00:15:25.279 "write": true, 00:15:25.279 "unmap": true, 00:15:25.279 "flush": true, 00:15:25.279 "reset": true, 00:15:25.279 "nvme_admin": false, 00:15:25.279 "nvme_io": false, 00:15:25.279 "nvme_io_md": false, 00:15:25.279 "write_zeroes": true, 00:15:25.279 "zcopy": true, 00:15:25.279 "get_zone_info": false, 00:15:25.279 "zone_management": false, 00:15:25.279 "zone_append": false, 00:15:25.279 "compare": false, 00:15:25.279 "compare_and_write": false, 00:15:25.279 "abort": true, 00:15:25.279 "seek_hole": false, 00:15:25.279 "seek_data": false, 00:15:25.279 "copy": true, 00:15:25.279 "nvme_iov_md": false 00:15:25.279 }, 00:15:25.279 "memory_domains": [ 00:15:25.279 { 00:15:25.279 "dma_device_id": "system", 00:15:25.279 "dma_device_type": 1 00:15:25.279 }, 00:15:25.279 { 00:15:25.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.539 "dma_device_type": 2 00:15:25.539 } 00:15:25.539 ], 00:15:25.539 "driver_specific": {} 00:15:25.539 } 00:15:25.539 ] 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.539 03:24:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.539 "name": "Existed_Raid", 00:15:25.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.539 "strip_size_kb": 64, 00:15:25.539 "state": "configuring", 00:15:25.539 "raid_level": "raid5f", 00:15:25.539 "superblock": false, 00:15:25.539 "num_base_bdevs": 4, 00:15:25.539 
"num_base_bdevs_discovered": 2, 00:15:25.539 "num_base_bdevs_operational": 4, 00:15:25.539 "base_bdevs_list": [ 00:15:25.539 { 00:15:25.539 "name": "BaseBdev1", 00:15:25.539 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:25.539 "is_configured": true, 00:15:25.539 "data_offset": 0, 00:15:25.539 "data_size": 65536 00:15:25.539 }, 00:15:25.539 { 00:15:25.539 "name": "BaseBdev2", 00:15:25.539 "uuid": "70faa97b-5350-4fdb-8123-978fe86287b1", 00:15:25.539 "is_configured": true, 00:15:25.539 "data_offset": 0, 00:15:25.539 "data_size": 65536 00:15:25.539 }, 00:15:25.539 { 00:15:25.539 "name": "BaseBdev3", 00:15:25.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.539 "is_configured": false, 00:15:25.539 "data_offset": 0, 00:15:25.539 "data_size": 0 00:15:25.539 }, 00:15:25.539 { 00:15:25.539 "name": "BaseBdev4", 00:15:25.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.539 "is_configured": false, 00:15:25.539 "data_offset": 0, 00:15:25.539 "data_size": 0 00:15:25.539 } 00:15:25.539 ] 00:15:25.539 }' 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.539 03:24:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.800 [2024-11-21 03:24:13.297990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.800 BaseBdev3 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.800 03:24:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.800 [ 00:15:25.800 { 00:15:25.800 "name": "BaseBdev3", 00:15:25.800 "aliases": [ 00:15:25.800 "ee64daef-377e-4579-a107-3f9a9becfc59" 00:15:25.800 ], 00:15:25.800 "product_name": "Malloc disk", 00:15:25.800 "block_size": 512, 00:15:25.800 "num_blocks": 65536, 00:15:25.800 "uuid": "ee64daef-377e-4579-a107-3f9a9becfc59", 00:15:25.800 "assigned_rate_limits": { 00:15:25.800 "rw_ios_per_sec": 0, 00:15:25.800 "rw_mbytes_per_sec": 0, 00:15:25.800 "r_mbytes_per_sec": 0, 00:15:25.800 "w_mbytes_per_sec": 0 00:15:25.800 }, 00:15:25.800 "claimed": true, 00:15:25.800 "claim_type": "exclusive_write", 00:15:25.800 "zoned": false, 00:15:25.800 
"supported_io_types": { 00:15:25.800 "read": true, 00:15:25.800 "write": true, 00:15:25.800 "unmap": true, 00:15:25.800 "flush": true, 00:15:25.800 "reset": true, 00:15:25.800 "nvme_admin": false, 00:15:25.800 "nvme_io": false, 00:15:25.800 "nvme_io_md": false, 00:15:25.800 "write_zeroes": true, 00:15:25.800 "zcopy": true, 00:15:25.800 "get_zone_info": false, 00:15:25.800 "zone_management": false, 00:15:25.800 "zone_append": false, 00:15:25.800 "compare": false, 00:15:25.800 "compare_and_write": false, 00:15:25.800 "abort": true, 00:15:25.800 "seek_hole": false, 00:15:25.800 "seek_data": false, 00:15:25.800 "copy": true, 00:15:25.800 "nvme_iov_md": false 00:15:25.800 }, 00:15:25.800 "memory_domains": [ 00:15:25.800 { 00:15:25.800 "dma_device_id": "system", 00:15:25.800 "dma_device_type": 1 00:15:25.800 }, 00:15:25.800 { 00:15:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.800 "dma_device_type": 2 00:15:25.800 } 00:15:25.800 ], 00:15:25.800 "driver_specific": {} 00:15:25.800 } 00:15:25.800 ] 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.800 03:24:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.800 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.060 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.060 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.060 "name": "Existed_Raid", 00:15:26.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.060 "strip_size_kb": 64, 00:15:26.060 "state": "configuring", 00:15:26.060 "raid_level": "raid5f", 00:15:26.060 "superblock": false, 00:15:26.060 "num_base_bdevs": 4, 00:15:26.060 "num_base_bdevs_discovered": 3, 00:15:26.060 "num_base_bdevs_operational": 4, 00:15:26.060 "base_bdevs_list": [ 00:15:26.060 { 00:15:26.060 "name": "BaseBdev1", 00:15:26.060 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:26.060 "is_configured": true, 00:15:26.060 "data_offset": 0, 00:15:26.060 "data_size": 65536 00:15:26.060 }, 00:15:26.060 { 00:15:26.060 "name": 
"BaseBdev2", 00:15:26.060 "uuid": "70faa97b-5350-4fdb-8123-978fe86287b1", 00:15:26.060 "is_configured": true, 00:15:26.060 "data_offset": 0, 00:15:26.060 "data_size": 65536 00:15:26.060 }, 00:15:26.060 { 00:15:26.060 "name": "BaseBdev3", 00:15:26.060 "uuid": "ee64daef-377e-4579-a107-3f9a9becfc59", 00:15:26.060 "is_configured": true, 00:15:26.060 "data_offset": 0, 00:15:26.060 "data_size": 65536 00:15:26.060 }, 00:15:26.060 { 00:15:26.060 "name": "BaseBdev4", 00:15:26.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.060 "is_configured": false, 00:15:26.060 "data_offset": 0, 00:15:26.060 "data_size": 0 00:15:26.060 } 00:15:26.060 ] 00:15:26.060 }' 00:15:26.060 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.060 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 [2024-11-21 03:24:13.785156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.320 [2024-11-21 03:24:13.785213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:26.320 [2024-11-21 03:24:13.785223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:26.320 [2024-11-21 03:24:13.785498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:26.320 [2024-11-21 03:24:13.785942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:26.320 [2024-11-21 03:24:13.785953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007b00 00:15:26.320 [2024-11-21 03:24:13.786174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.320 BaseBdev4 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.320 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.321 [ 00:15:26.321 { 00:15:26.321 "name": "BaseBdev4", 00:15:26.321 "aliases": [ 00:15:26.321 "31ac5ca1-57f3-4aa0-ab07-dcb0781c6f33" 00:15:26.321 ], 00:15:26.321 "product_name": "Malloc disk", 00:15:26.321 "block_size": 512, 
00:15:26.321 "num_blocks": 65536, 00:15:26.321 "uuid": "31ac5ca1-57f3-4aa0-ab07-dcb0781c6f33", 00:15:26.321 "assigned_rate_limits": { 00:15:26.321 "rw_ios_per_sec": 0, 00:15:26.321 "rw_mbytes_per_sec": 0, 00:15:26.321 "r_mbytes_per_sec": 0, 00:15:26.321 "w_mbytes_per_sec": 0 00:15:26.321 }, 00:15:26.321 "claimed": true, 00:15:26.321 "claim_type": "exclusive_write", 00:15:26.321 "zoned": false, 00:15:26.321 "supported_io_types": { 00:15:26.321 "read": true, 00:15:26.321 "write": true, 00:15:26.321 "unmap": true, 00:15:26.321 "flush": true, 00:15:26.321 "reset": true, 00:15:26.321 "nvme_admin": false, 00:15:26.321 "nvme_io": false, 00:15:26.321 "nvme_io_md": false, 00:15:26.321 "write_zeroes": true, 00:15:26.321 "zcopy": true, 00:15:26.321 "get_zone_info": false, 00:15:26.321 "zone_management": false, 00:15:26.321 "zone_append": false, 00:15:26.321 "compare": false, 00:15:26.321 "compare_and_write": false, 00:15:26.321 "abort": true, 00:15:26.321 "seek_hole": false, 00:15:26.321 "seek_data": false, 00:15:26.321 "copy": true, 00:15:26.321 "nvme_iov_md": false 00:15:26.321 }, 00:15:26.321 "memory_domains": [ 00:15:26.321 { 00:15:26.321 "dma_device_id": "system", 00:15:26.321 "dma_device_type": 1 00:15:26.321 }, 00:15:26.321 { 00:15:26.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.321 "dma_device_type": 2 00:15:26.321 } 00:15:26.321 ], 00:15:26.321 "driver_specific": {} 00:15:26.321 } 00:15:26.321 ] 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 
00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.321 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.581 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.581 "name": "Existed_Raid", 00:15:26.581 "uuid": "0cf85fbc-d630-4b57-86a6-a4727f00ac11", 00:15:26.581 "strip_size_kb": 64, 00:15:26.581 "state": "online", 00:15:26.581 "raid_level": "raid5f", 00:15:26.581 "superblock": false, 00:15:26.581 "num_base_bdevs": 4, 00:15:26.581 
"num_base_bdevs_discovered": 4, 00:15:26.581 "num_base_bdevs_operational": 4, 00:15:26.581 "base_bdevs_list": [ 00:15:26.581 { 00:15:26.581 "name": "BaseBdev1", 00:15:26.581 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:26.581 "is_configured": true, 00:15:26.581 "data_offset": 0, 00:15:26.581 "data_size": 65536 00:15:26.581 }, 00:15:26.581 { 00:15:26.581 "name": "BaseBdev2", 00:15:26.581 "uuid": "70faa97b-5350-4fdb-8123-978fe86287b1", 00:15:26.581 "is_configured": true, 00:15:26.581 "data_offset": 0, 00:15:26.581 "data_size": 65536 00:15:26.581 }, 00:15:26.581 { 00:15:26.581 "name": "BaseBdev3", 00:15:26.581 "uuid": "ee64daef-377e-4579-a107-3f9a9becfc59", 00:15:26.581 "is_configured": true, 00:15:26.581 "data_offset": 0, 00:15:26.581 "data_size": 65536 00:15:26.581 }, 00:15:26.581 { 00:15:26.581 "name": "BaseBdev4", 00:15:26.581 "uuid": "31ac5ca1-57f3-4aa0-ab07-dcb0781c6f33", 00:15:26.581 "is_configured": true, 00:15:26.581 "data_offset": 0, 00:15:26.581 "data_size": 65536 00:15:26.581 } 00:15:26.581 ] 00:15:26.581 }' 00:15:26.581 03:24:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.581 03:24:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.841 03:24:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.841 [2024-11-21 03:24:14.301504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.841 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.841 "name": "Existed_Raid", 00:15:26.841 "aliases": [ 00:15:26.841 "0cf85fbc-d630-4b57-86a6-a4727f00ac11" 00:15:26.841 ], 00:15:26.841 "product_name": "Raid Volume", 00:15:26.841 "block_size": 512, 00:15:26.841 "num_blocks": 196608, 00:15:26.841 "uuid": "0cf85fbc-d630-4b57-86a6-a4727f00ac11", 00:15:26.841 "assigned_rate_limits": { 00:15:26.841 "rw_ios_per_sec": 0, 00:15:26.841 "rw_mbytes_per_sec": 0, 00:15:26.841 "r_mbytes_per_sec": 0, 00:15:26.841 "w_mbytes_per_sec": 0 00:15:26.841 }, 00:15:26.841 "claimed": false, 00:15:26.841 "zoned": false, 00:15:26.841 "supported_io_types": { 00:15:26.841 "read": true, 00:15:26.841 "write": true, 00:15:26.841 "unmap": false, 00:15:26.841 "flush": false, 00:15:26.841 "reset": true, 00:15:26.841 "nvme_admin": false, 00:15:26.841 "nvme_io": false, 00:15:26.841 "nvme_io_md": false, 00:15:26.841 "write_zeroes": true, 00:15:26.841 "zcopy": false, 00:15:26.841 "get_zone_info": false, 00:15:26.841 "zone_management": false, 00:15:26.841 "zone_append": false, 00:15:26.841 "compare": false, 00:15:26.841 "compare_and_write": false, 00:15:26.841 "abort": false, 00:15:26.841 "seek_hole": false, 00:15:26.841 "seek_data": false, 00:15:26.841 "copy": false, 00:15:26.841 "nvme_iov_md": false 
00:15:26.841 }, 00:15:26.842 "driver_specific": { 00:15:26.842 "raid": { 00:15:26.842 "uuid": "0cf85fbc-d630-4b57-86a6-a4727f00ac11", 00:15:26.842 "strip_size_kb": 64, 00:15:26.842 "state": "online", 00:15:26.842 "raid_level": "raid5f", 00:15:26.842 "superblock": false, 00:15:26.842 "num_base_bdevs": 4, 00:15:26.842 "num_base_bdevs_discovered": 4, 00:15:26.842 "num_base_bdevs_operational": 4, 00:15:26.842 "base_bdevs_list": [ 00:15:26.842 { 00:15:26.842 "name": "BaseBdev1", 00:15:26.842 "uuid": "c6e37e32-ea3c-43eb-9000-0b02730fddf3", 00:15:26.842 "is_configured": true, 00:15:26.842 "data_offset": 0, 00:15:26.842 "data_size": 65536 00:15:26.842 }, 00:15:26.842 { 00:15:26.842 "name": "BaseBdev2", 00:15:26.842 "uuid": "70faa97b-5350-4fdb-8123-978fe86287b1", 00:15:26.842 "is_configured": true, 00:15:26.842 "data_offset": 0, 00:15:26.842 "data_size": 65536 00:15:26.842 }, 00:15:26.842 { 00:15:26.842 "name": "BaseBdev3", 00:15:26.842 "uuid": "ee64daef-377e-4579-a107-3f9a9becfc59", 00:15:26.842 "is_configured": true, 00:15:26.842 "data_offset": 0, 00:15:26.842 "data_size": 65536 00:15:26.842 }, 00:15:26.842 { 00:15:26.842 "name": "BaseBdev4", 00:15:26.842 "uuid": "31ac5ca1-57f3-4aa0-ab07-dcb0781c6f33", 00:15:26.842 "is_configured": true, 00:15:26.842 "data_offset": 0, 00:15:26.842 "data_size": 65536 00:15:26.842 } 00:15:26.842 ] 00:15:26.842 } 00:15:26.842 } 00:15:26.842 }' 00:15:26.842 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.842 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:26.842 BaseBdev2 00:15:26.842 BaseBdev3 00:15:26.842 BaseBdev4' 00:15:26.842 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.102 
03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 [2024-11-21 03:24:14.617514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:27.102 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.103 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.103 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.103 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.103 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.103 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.362 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.362 "name": "Existed_Raid", 00:15:27.362 "uuid": "0cf85fbc-d630-4b57-86a6-a4727f00ac11", 00:15:27.362 "strip_size_kb": 64, 00:15:27.362 "state": "online", 00:15:27.362 "raid_level": "raid5f", 00:15:27.362 "superblock": false, 00:15:27.362 "num_base_bdevs": 4, 00:15:27.362 "num_base_bdevs_discovered": 3, 00:15:27.362 "num_base_bdevs_operational": 3, 00:15:27.362 "base_bdevs_list": [ 00:15:27.362 { 00:15:27.362 "name": null, 00:15:27.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.362 "is_configured": false, 00:15:27.362 "data_offset": 0, 00:15:27.362 "data_size": 65536 00:15:27.362 }, 00:15:27.362 { 00:15:27.362 "name": "BaseBdev2", 00:15:27.362 "uuid": "70faa97b-5350-4fdb-8123-978fe86287b1", 00:15:27.362 "is_configured": true, 00:15:27.362 "data_offset": 0, 00:15:27.362 "data_size": 65536 00:15:27.362 }, 00:15:27.362 { 00:15:27.362 "name": "BaseBdev3", 00:15:27.362 "uuid": "ee64daef-377e-4579-a107-3f9a9becfc59", 00:15:27.362 "is_configured": true, 00:15:27.362 "data_offset": 0, 00:15:27.362 "data_size": 65536 00:15:27.362 }, 00:15:27.362 { 00:15:27.362 "name": "BaseBdev4", 00:15:27.362 "uuid": "31ac5ca1-57f3-4aa0-ab07-dcb0781c6f33", 00:15:27.362 
"is_configured": true, 00:15:27.362 "data_offset": 0, 00:15:27.362 "data_size": 65536 00:15:27.362 } 00:15:27.363 ] 00:15:27.363 }' 00:15:27.363 03:24:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.363 03:24:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.622 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:27.622 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.623 [2024-11-21 03:24:15.080857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.623 [2024-11-21 03:24:15.081016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.623 [2024-11-21 03:24:15.092435] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.623 [2024-11-21 03:24:15.148466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.623 03:24:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.623 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.884 [2024-11-21 03:24:15.223569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:27.884 [2024-11-21 03:24:15.223613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.884 03:24:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.884 BaseBdev2 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.884 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.884 [ 00:15:27.884 { 00:15:27.884 "name": "BaseBdev2", 00:15:27.884 "aliases": [ 00:15:27.885 "920b6fdf-c25a-467d-a474-d24ff6cd5f6a" 00:15:27.885 ], 00:15:27.885 "product_name": "Malloc disk", 00:15:27.885 "block_size": 512, 00:15:27.885 "num_blocks": 65536, 00:15:27.885 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:27.885 "assigned_rate_limits": { 00:15:27.885 "rw_ios_per_sec": 0, 00:15:27.885 "rw_mbytes_per_sec": 0, 00:15:27.885 "r_mbytes_per_sec": 0, 00:15:27.885 "w_mbytes_per_sec": 0 00:15:27.885 }, 00:15:27.885 "claimed": false, 00:15:27.885 "zoned": false, 00:15:27.885 "supported_io_types": { 00:15:27.885 "read": true, 00:15:27.885 "write": true, 00:15:27.885 "unmap": true, 00:15:27.885 "flush": true, 00:15:27.885 "reset": true, 00:15:27.885 "nvme_admin": false, 00:15:27.885 "nvme_io": false, 00:15:27.885 "nvme_io_md": false, 00:15:27.885 "write_zeroes": true, 00:15:27.885 "zcopy": true, 00:15:27.885 "get_zone_info": false, 00:15:27.885 "zone_management": false, 00:15:27.885 "zone_append": false, 00:15:27.885 "compare": false, 00:15:27.885 "compare_and_write": false, 00:15:27.885 "abort": true, 00:15:27.885 "seek_hole": false, 00:15:27.885 
"seek_data": false, 00:15:27.885 "copy": true, 00:15:27.885 "nvme_iov_md": false 00:15:27.885 }, 00:15:27.885 "memory_domains": [ 00:15:27.885 { 00:15:27.885 "dma_device_id": "system", 00:15:27.885 "dma_device_type": 1 00:15:27.885 }, 00:15:27.885 { 00:15:27.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.885 "dma_device_type": 2 00:15:27.885 } 00:15:27.885 ], 00:15:27.885 "driver_specific": {} 00:15:27.885 } 00:15:27.885 ] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 BaseBdev3 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 [ 00:15:27.885 { 00:15:27.885 "name": "BaseBdev3", 00:15:27.885 "aliases": [ 00:15:27.885 "243bb356-b5f4-44dd-8a3c-c918730479ac" 00:15:27.885 ], 00:15:27.885 "product_name": "Malloc disk", 00:15:27.885 "block_size": 512, 00:15:27.885 "num_blocks": 65536, 00:15:27.885 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:27.885 "assigned_rate_limits": { 00:15:27.885 "rw_ios_per_sec": 0, 00:15:27.885 "rw_mbytes_per_sec": 0, 00:15:27.885 "r_mbytes_per_sec": 0, 00:15:27.885 "w_mbytes_per_sec": 0 00:15:27.885 }, 00:15:27.885 "claimed": false, 00:15:27.885 "zoned": false, 00:15:27.885 "supported_io_types": { 00:15:27.885 "read": true, 00:15:27.885 "write": true, 00:15:27.885 "unmap": true, 00:15:27.885 "flush": true, 00:15:27.885 "reset": true, 00:15:27.885 "nvme_admin": false, 00:15:27.885 "nvme_io": false, 00:15:27.885 "nvme_io_md": false, 00:15:27.885 "write_zeroes": true, 00:15:27.885 "zcopy": true, 00:15:27.885 "get_zone_info": false, 00:15:27.885 "zone_management": false, 00:15:27.885 "zone_append": false, 00:15:27.885 "compare": false, 00:15:27.885 "compare_and_write": false, 00:15:27.885 "abort": true, 
00:15:27.885 "seek_hole": false, 00:15:27.885 "seek_data": false, 00:15:27.885 "copy": true, 00:15:27.885 "nvme_iov_md": false 00:15:27.885 }, 00:15:27.885 "memory_domains": [ 00:15:27.885 { 00:15:27.885 "dma_device_id": "system", 00:15:27.885 "dma_device_type": 1 00:15:27.885 }, 00:15:27.885 { 00:15:27.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.885 "dma_device_type": 2 00:15:27.885 } 00:15:27.885 ], 00:15:27.885 "driver_specific": {} 00:15:27.885 } 00:15:27.885 ] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 BaseBdev4 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.885 03:24:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 [ 00:15:27.885 { 00:15:27.885 "name": "BaseBdev4", 00:15:27.885 "aliases": [ 00:15:27.885 "b9b36cf0-6156-4d46-a88b-333dd7537df6" 00:15:27.885 ], 00:15:27.885 "product_name": "Malloc disk", 00:15:27.885 "block_size": 512, 00:15:27.885 "num_blocks": 65536, 00:15:27.885 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:27.885 "assigned_rate_limits": { 00:15:27.885 "rw_ios_per_sec": 0, 00:15:27.885 "rw_mbytes_per_sec": 0, 00:15:27.885 "r_mbytes_per_sec": 0, 00:15:27.885 "w_mbytes_per_sec": 0 00:15:27.885 }, 00:15:27.885 "claimed": false, 00:15:27.885 "zoned": false, 00:15:27.885 "supported_io_types": { 00:15:27.885 "read": true, 00:15:27.885 "write": true, 00:15:27.885 "unmap": true, 00:15:27.885 "flush": true, 00:15:27.885 "reset": true, 00:15:27.885 "nvme_admin": false, 00:15:27.885 "nvme_io": false, 00:15:27.885 "nvme_io_md": false, 00:15:27.885 "write_zeroes": true, 00:15:27.885 "zcopy": true, 00:15:27.885 "get_zone_info": false, 00:15:27.885 "zone_management": false, 00:15:27.885 "zone_append": false, 00:15:27.885 "compare": false, 00:15:27.885 
"compare_and_write": false, 00:15:27.885 "abort": true, 00:15:27.885 "seek_hole": false, 00:15:27.885 "seek_data": false, 00:15:27.885 "copy": true, 00:15:27.885 "nvme_iov_md": false 00:15:27.885 }, 00:15:27.885 "memory_domains": [ 00:15:27.885 { 00:15:27.885 "dma_device_id": "system", 00:15:27.885 "dma_device_type": 1 00:15:27.885 }, 00:15:27.885 { 00:15:27.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.885 "dma_device_type": 2 00:15:27.885 } 00:15:27.885 ], 00:15:27.885 "driver_specific": {} 00:15:27.885 } 00:15:27.885 ] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.885 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.885 [2024-11-21 03:24:15.442758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.886 [2024-11-21 03:24:15.442848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.886 [2024-11-21 03:24:15.442897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.886 [2024-11-21 03:24:15.444703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.886 [2024-11-21 03:24:15.444808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.144 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:28.145 "name": "Existed_Raid", 00:15:28.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.145 "strip_size_kb": 64, 00:15:28.145 "state": "configuring", 00:15:28.145 "raid_level": "raid5f", 00:15:28.145 "superblock": false, 00:15:28.145 "num_base_bdevs": 4, 00:15:28.145 "num_base_bdevs_discovered": 3, 00:15:28.145 "num_base_bdevs_operational": 4, 00:15:28.145 "base_bdevs_list": [ 00:15:28.145 { 00:15:28.145 "name": "BaseBdev1", 00:15:28.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.145 "is_configured": false, 00:15:28.145 "data_offset": 0, 00:15:28.145 "data_size": 0 00:15:28.145 }, 00:15:28.145 { 00:15:28.145 "name": "BaseBdev2", 00:15:28.145 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:28.145 "is_configured": true, 00:15:28.145 "data_offset": 0, 00:15:28.145 "data_size": 65536 00:15:28.145 }, 00:15:28.145 { 00:15:28.145 "name": "BaseBdev3", 00:15:28.145 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:28.145 "is_configured": true, 00:15:28.145 "data_offset": 0, 00:15:28.145 "data_size": 65536 00:15:28.145 }, 00:15:28.145 { 00:15:28.145 "name": "BaseBdev4", 00:15:28.145 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:28.145 "is_configured": true, 00:15:28.145 "data_offset": 0, 00:15:28.145 "data_size": 65536 00:15:28.145 } 00:15:28.145 ] 00:15:28.145 }' 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.145 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.405 [2024-11-21 03:24:15.890817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.405 "name": 
"Existed_Raid", 00:15:28.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.405 "strip_size_kb": 64, 00:15:28.405 "state": "configuring", 00:15:28.405 "raid_level": "raid5f", 00:15:28.405 "superblock": false, 00:15:28.405 "num_base_bdevs": 4, 00:15:28.405 "num_base_bdevs_discovered": 2, 00:15:28.405 "num_base_bdevs_operational": 4, 00:15:28.405 "base_bdevs_list": [ 00:15:28.405 { 00:15:28.405 "name": "BaseBdev1", 00:15:28.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.405 "is_configured": false, 00:15:28.405 "data_offset": 0, 00:15:28.405 "data_size": 0 00:15:28.405 }, 00:15:28.405 { 00:15:28.405 "name": null, 00:15:28.405 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:28.405 "is_configured": false, 00:15:28.405 "data_offset": 0, 00:15:28.405 "data_size": 65536 00:15:28.405 }, 00:15:28.405 { 00:15:28.405 "name": "BaseBdev3", 00:15:28.405 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:28.405 "is_configured": true, 00:15:28.405 "data_offset": 0, 00:15:28.405 "data_size": 65536 00:15:28.405 }, 00:15:28.405 { 00:15:28.405 "name": "BaseBdev4", 00:15:28.405 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:28.405 "is_configured": true, 00:15:28.405 "data_offset": 0, 00:15:28.405 "data_size": 65536 00:15:28.405 } 00:15:28.405 ] 00:15:28.405 }' 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.405 03:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.018 03:24:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.018 [2024-11-21 03:24:16.474128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.018 BaseBdev1 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.018 03:24:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.018 [ 00:15:29.018 { 00:15:29.018 "name": "BaseBdev1", 00:15:29.018 "aliases": [ 00:15:29.018 "fd2573dc-a79c-451f-bc0c-1824a15fc5f3" 00:15:29.018 ], 00:15:29.018 "product_name": "Malloc disk", 00:15:29.018 "block_size": 512, 00:15:29.018 "num_blocks": 65536, 00:15:29.018 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:29.018 "assigned_rate_limits": { 00:15:29.018 "rw_ios_per_sec": 0, 00:15:29.018 "rw_mbytes_per_sec": 0, 00:15:29.018 "r_mbytes_per_sec": 0, 00:15:29.018 "w_mbytes_per_sec": 0 00:15:29.018 }, 00:15:29.018 "claimed": true, 00:15:29.018 "claim_type": "exclusive_write", 00:15:29.018 "zoned": false, 00:15:29.018 "supported_io_types": { 00:15:29.018 "read": true, 00:15:29.018 "write": true, 00:15:29.018 "unmap": true, 00:15:29.018 "flush": true, 00:15:29.018 "reset": true, 00:15:29.018 "nvme_admin": false, 00:15:29.018 "nvme_io": false, 00:15:29.018 "nvme_io_md": false, 00:15:29.018 "write_zeroes": true, 00:15:29.018 "zcopy": true, 00:15:29.018 "get_zone_info": false, 00:15:29.018 "zone_management": false, 00:15:29.018 "zone_append": false, 00:15:29.018 "compare": false, 00:15:29.018 "compare_and_write": false, 00:15:29.018 "abort": true, 00:15:29.018 "seek_hole": false, 00:15:29.018 "seek_data": false, 00:15:29.018 "copy": true, 00:15:29.018 "nvme_iov_md": false 00:15:29.018 }, 00:15:29.018 "memory_domains": [ 00:15:29.018 { 00:15:29.018 "dma_device_id": "system", 00:15:29.018 "dma_device_type": 1 00:15:29.018 }, 00:15:29.018 { 00:15:29.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.018 "dma_device_type": 2 00:15:29.018 } 00:15:29.018 ], 00:15:29.018 "driver_specific": {} 00:15:29.018 } 00:15:29.018 ] 
00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.018 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.019 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.292 03:24:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.292 "name": "Existed_Raid", 00:15:29.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.292 "strip_size_kb": 64, 00:15:29.292 "state": "configuring", 00:15:29.292 "raid_level": "raid5f", 00:15:29.292 "superblock": false, 00:15:29.292 "num_base_bdevs": 4, 00:15:29.292 "num_base_bdevs_discovered": 3, 00:15:29.292 "num_base_bdevs_operational": 4, 00:15:29.292 "base_bdevs_list": [ 00:15:29.292 { 00:15:29.292 "name": "BaseBdev1", 00:15:29.292 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:29.292 "is_configured": true, 00:15:29.292 "data_offset": 0, 00:15:29.292 "data_size": 65536 00:15:29.292 }, 00:15:29.292 { 00:15:29.292 "name": null, 00:15:29.292 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:29.292 "is_configured": false, 00:15:29.292 "data_offset": 0, 00:15:29.292 "data_size": 65536 00:15:29.292 }, 00:15:29.292 { 00:15:29.292 "name": "BaseBdev3", 00:15:29.292 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:29.292 "is_configured": true, 00:15:29.292 "data_offset": 0, 00:15:29.292 "data_size": 65536 00:15:29.292 }, 00:15:29.292 { 00:15:29.292 "name": "BaseBdev4", 00:15:29.292 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:29.292 "is_configured": true, 00:15:29.292 "data_offset": 0, 00:15:29.292 "data_size": 65536 00:15:29.292 } 00:15:29.292 ] 00:15:29.292 }' 00:15:29.292 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.292 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.551 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.552 
03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.552 [2024-11-21 03:24:16.990296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.552 03:24:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.552 03:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.552 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.552 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.552 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.552 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.552 "name": "Existed_Raid", 00:15:29.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.552 "strip_size_kb": 64, 00:15:29.552 "state": "configuring", 00:15:29.552 "raid_level": "raid5f", 00:15:29.552 "superblock": false, 00:15:29.552 "num_base_bdevs": 4, 00:15:29.552 "num_base_bdevs_discovered": 2, 00:15:29.552 "num_base_bdevs_operational": 4, 00:15:29.552 "base_bdevs_list": [ 00:15:29.552 { 00:15:29.552 "name": "BaseBdev1", 00:15:29.552 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:29.552 "is_configured": true, 00:15:29.552 "data_offset": 0, 00:15:29.552 "data_size": 65536 00:15:29.552 }, 00:15:29.552 { 00:15:29.552 "name": null, 00:15:29.552 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:29.552 "is_configured": false, 00:15:29.552 "data_offset": 0, 00:15:29.552 "data_size": 65536 00:15:29.552 }, 00:15:29.552 { 00:15:29.552 "name": null, 00:15:29.552 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:29.552 "is_configured": false, 00:15:29.552 "data_offset": 0, 00:15:29.552 "data_size": 65536 00:15:29.552 }, 00:15:29.552 { 00:15:29.552 "name": "BaseBdev4", 00:15:29.552 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:29.552 "is_configured": true, 00:15:29.552 
"data_offset": 0, 00:15:29.552 "data_size": 65536 00:15:29.552 } 00:15:29.552 ] 00:15:29.552 }' 00:15:29.552 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.552 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.121 [2024-11-21 03:24:17.454457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.121 
03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.121 "name": "Existed_Raid", 00:15:30.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.121 "strip_size_kb": 64, 00:15:30.121 "state": "configuring", 00:15:30.121 "raid_level": "raid5f", 00:15:30.121 "superblock": false, 00:15:30.121 "num_base_bdevs": 4, 00:15:30.121 "num_base_bdevs_discovered": 3, 00:15:30.121 "num_base_bdevs_operational": 4, 00:15:30.121 "base_bdevs_list": [ 00:15:30.121 { 00:15:30.121 "name": "BaseBdev1", 00:15:30.121 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:30.121 "is_configured": 
true, 00:15:30.121 "data_offset": 0, 00:15:30.121 "data_size": 65536 00:15:30.121 }, 00:15:30.121 { 00:15:30.121 "name": null, 00:15:30.121 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:30.121 "is_configured": false, 00:15:30.121 "data_offset": 0, 00:15:30.121 "data_size": 65536 00:15:30.121 }, 00:15:30.121 { 00:15:30.121 "name": "BaseBdev3", 00:15:30.121 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:30.121 "is_configured": true, 00:15:30.121 "data_offset": 0, 00:15:30.121 "data_size": 65536 00:15:30.121 }, 00:15:30.121 { 00:15:30.121 "name": "BaseBdev4", 00:15:30.121 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:30.121 "is_configured": true, 00:15:30.121 "data_offset": 0, 00:15:30.121 "data_size": 65536 00:15:30.121 } 00:15:30.121 ] 00:15:30.121 }' 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.121 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.381 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.381 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.381 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.381 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.640 [2024-11-21 03:24:17.958592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.640 03:24:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.640 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.640 "name": "Existed_Raid", 00:15:30.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.640 "strip_size_kb": 64, 00:15:30.640 "state": "configuring", 00:15:30.640 "raid_level": "raid5f", 00:15:30.640 "superblock": false, 00:15:30.640 "num_base_bdevs": 4, 00:15:30.640 "num_base_bdevs_discovered": 2, 00:15:30.640 "num_base_bdevs_operational": 4, 00:15:30.640 "base_bdevs_list": [ 00:15:30.640 { 00:15:30.640 "name": null, 00:15:30.640 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:30.640 "is_configured": false, 00:15:30.640 "data_offset": 0, 00:15:30.640 "data_size": 65536 00:15:30.640 }, 00:15:30.640 { 00:15:30.640 "name": null, 00:15:30.640 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:30.640 "is_configured": false, 00:15:30.640 "data_offset": 0, 00:15:30.640 "data_size": 65536 00:15:30.640 }, 00:15:30.640 { 00:15:30.640 "name": "BaseBdev3", 00:15:30.640 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:30.640 "is_configured": true, 00:15:30.640 "data_offset": 0, 00:15:30.640 "data_size": 65536 00:15:30.640 }, 00:15:30.640 { 00:15:30.640 "name": "BaseBdev4", 00:15:30.640 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:30.640 "is_configured": true, 00:15:30.640 "data_offset": 0, 00:15:30.640 "data_size": 65536 00:15:30.640 } 00:15:30.640 ] 00:15:30.640 }' 00:15:30.640 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.640 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.900 [2024-11-21 03:24:18.457085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.900 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.160 "name": "Existed_Raid", 00:15:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.160 "strip_size_kb": 64, 00:15:31.160 "state": "configuring", 00:15:31.160 "raid_level": "raid5f", 00:15:31.160 "superblock": false, 00:15:31.160 "num_base_bdevs": 4, 00:15:31.160 "num_base_bdevs_discovered": 3, 00:15:31.160 "num_base_bdevs_operational": 4, 00:15:31.160 "base_bdevs_list": [ 00:15:31.160 { 00:15:31.160 "name": null, 00:15:31.160 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:31.160 "is_configured": false, 00:15:31.160 "data_offset": 0, 00:15:31.160 "data_size": 65536 00:15:31.160 }, 00:15:31.160 { 00:15:31.160 "name": "BaseBdev2", 00:15:31.160 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:31.160 "is_configured": true, 00:15:31.160 "data_offset": 0, 00:15:31.160 "data_size": 65536 00:15:31.160 }, 00:15:31.160 { 00:15:31.160 "name": "BaseBdev3", 00:15:31.160 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:31.160 "is_configured": true, 00:15:31.160 "data_offset": 0, 00:15:31.160 "data_size": 65536 00:15:31.160 }, 00:15:31.160 { 00:15:31.160 "name": "BaseBdev4", 00:15:31.160 "uuid": 
"b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:31.160 "is_configured": true, 00:15:31.160 "data_offset": 0, 00:15:31.160 "data_size": 65536 00:15:31.160 } 00:15:31.160 ] 00:15:31.160 }' 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.160 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.419 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.420 03:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:31.680 03:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd2573dc-a79c-451f-bc0c-1824a15fc5f3 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.680 03:24:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.680 [2024-11-21 03:24:19.032129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:31.680 [2024-11-21 03:24:19.032245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:31.680 [2024-11-21 03:24:19.032262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:31.680 [2024-11-21 03:24:19.032539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:15:31.680 [2024-11-21 03:24:19.032998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:31.680 [2024-11-21 03:24:19.033008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:31.680 [2024-11-21 03:24:19.033198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.680 NewBaseBdev 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.680 [ 00:15:31.680 { 00:15:31.680 "name": "NewBaseBdev", 00:15:31.680 "aliases": [ 00:15:31.680 "fd2573dc-a79c-451f-bc0c-1824a15fc5f3" 00:15:31.680 ], 00:15:31.680 "product_name": "Malloc disk", 00:15:31.680 "block_size": 512, 00:15:31.680 "num_blocks": 65536, 00:15:31.680 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:31.680 "assigned_rate_limits": { 00:15:31.680 "rw_ios_per_sec": 0, 00:15:31.680 "rw_mbytes_per_sec": 0, 00:15:31.680 "r_mbytes_per_sec": 0, 00:15:31.680 "w_mbytes_per_sec": 0 00:15:31.680 }, 00:15:31.680 "claimed": true, 00:15:31.680 "claim_type": "exclusive_write", 00:15:31.680 "zoned": false, 00:15:31.680 "supported_io_types": { 00:15:31.680 "read": true, 00:15:31.680 "write": true, 00:15:31.680 "unmap": true, 00:15:31.680 "flush": true, 00:15:31.680 "reset": true, 00:15:31.680 "nvme_admin": false, 00:15:31.680 "nvme_io": false, 00:15:31.680 "nvme_io_md": false, 00:15:31.680 "write_zeroes": true, 00:15:31.680 "zcopy": true, 00:15:31.680 "get_zone_info": false, 00:15:31.680 "zone_management": false, 00:15:31.680 "zone_append": false, 00:15:31.680 "compare": false, 00:15:31.680 "compare_and_write": false, 00:15:31.680 "abort": true, 00:15:31.680 "seek_hole": false, 00:15:31.680 "seek_data": false, 00:15:31.680 "copy": true, 00:15:31.680 "nvme_iov_md": false 00:15:31.680 }, 00:15:31.680 "memory_domains": [ 00:15:31.680 { 
00:15:31.680 "dma_device_id": "system", 00:15:31.680 "dma_device_type": 1 00:15:31.680 }, 00:15:31.680 { 00:15:31.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.680 "dma_device_type": 2 00:15:31.680 } 00:15:31.680 ], 00:15:31.680 "driver_specific": {} 00:15:31.680 } 00:15:31.680 ] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.680 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.680 "name": "Existed_Raid", 00:15:31.680 "uuid": "9f3bc130-281a-43e2-b725-2f2c084dcee1", 00:15:31.680 "strip_size_kb": 64, 00:15:31.680 "state": "online", 00:15:31.680 "raid_level": "raid5f", 00:15:31.680 "superblock": false, 00:15:31.680 "num_base_bdevs": 4, 00:15:31.680 "num_base_bdevs_discovered": 4, 00:15:31.680 "num_base_bdevs_operational": 4, 00:15:31.680 "base_bdevs_list": [ 00:15:31.680 { 00:15:31.680 "name": "NewBaseBdev", 00:15:31.680 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:31.680 "is_configured": true, 00:15:31.680 "data_offset": 0, 00:15:31.680 "data_size": 65536 00:15:31.680 }, 00:15:31.680 { 00:15:31.680 "name": "BaseBdev2", 00:15:31.680 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:31.680 "is_configured": true, 00:15:31.680 "data_offset": 0, 00:15:31.680 "data_size": 65536 00:15:31.680 }, 00:15:31.680 { 00:15:31.680 "name": "BaseBdev3", 00:15:31.680 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:31.680 "is_configured": true, 00:15:31.680 "data_offset": 0, 00:15:31.680 "data_size": 65536 00:15:31.680 }, 00:15:31.681 { 00:15:31.681 "name": "BaseBdev4", 00:15:31.681 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:31.681 "is_configured": true, 00:15:31.681 "data_offset": 0, 00:15:31.681 "data_size": 65536 00:15:31.681 } 00:15:31.681 ] 00:15:31.681 }' 00:15:31.681 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.681 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.251 [2024-11-21 03:24:19.556494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.251 "name": "Existed_Raid", 00:15:32.251 "aliases": [ 00:15:32.251 "9f3bc130-281a-43e2-b725-2f2c084dcee1" 00:15:32.251 ], 00:15:32.251 "product_name": "Raid Volume", 00:15:32.251 "block_size": 512, 00:15:32.251 "num_blocks": 196608, 00:15:32.251 "uuid": "9f3bc130-281a-43e2-b725-2f2c084dcee1", 00:15:32.251 "assigned_rate_limits": { 00:15:32.251 "rw_ios_per_sec": 0, 00:15:32.251 "rw_mbytes_per_sec": 0, 00:15:32.251 "r_mbytes_per_sec": 0, 00:15:32.251 "w_mbytes_per_sec": 0 00:15:32.251 }, 00:15:32.251 "claimed": false, 00:15:32.251 "zoned": false, 00:15:32.251 "supported_io_types": { 00:15:32.251 
"read": true, 00:15:32.251 "write": true, 00:15:32.251 "unmap": false, 00:15:32.251 "flush": false, 00:15:32.251 "reset": true, 00:15:32.251 "nvme_admin": false, 00:15:32.251 "nvme_io": false, 00:15:32.251 "nvme_io_md": false, 00:15:32.251 "write_zeroes": true, 00:15:32.251 "zcopy": false, 00:15:32.251 "get_zone_info": false, 00:15:32.251 "zone_management": false, 00:15:32.251 "zone_append": false, 00:15:32.251 "compare": false, 00:15:32.251 "compare_and_write": false, 00:15:32.251 "abort": false, 00:15:32.251 "seek_hole": false, 00:15:32.251 "seek_data": false, 00:15:32.251 "copy": false, 00:15:32.251 "nvme_iov_md": false 00:15:32.251 }, 00:15:32.251 "driver_specific": { 00:15:32.251 "raid": { 00:15:32.251 "uuid": "9f3bc130-281a-43e2-b725-2f2c084dcee1", 00:15:32.251 "strip_size_kb": 64, 00:15:32.251 "state": "online", 00:15:32.251 "raid_level": "raid5f", 00:15:32.251 "superblock": false, 00:15:32.251 "num_base_bdevs": 4, 00:15:32.251 "num_base_bdevs_discovered": 4, 00:15:32.251 "num_base_bdevs_operational": 4, 00:15:32.251 "base_bdevs_list": [ 00:15:32.251 { 00:15:32.251 "name": "NewBaseBdev", 00:15:32.251 "uuid": "fd2573dc-a79c-451f-bc0c-1824a15fc5f3", 00:15:32.251 "is_configured": true, 00:15:32.251 "data_offset": 0, 00:15:32.251 "data_size": 65536 00:15:32.251 }, 00:15:32.251 { 00:15:32.251 "name": "BaseBdev2", 00:15:32.251 "uuid": "920b6fdf-c25a-467d-a474-d24ff6cd5f6a", 00:15:32.251 "is_configured": true, 00:15:32.251 "data_offset": 0, 00:15:32.251 "data_size": 65536 00:15:32.251 }, 00:15:32.251 { 00:15:32.251 "name": "BaseBdev3", 00:15:32.251 "uuid": "243bb356-b5f4-44dd-8a3c-c918730479ac", 00:15:32.251 "is_configured": true, 00:15:32.251 "data_offset": 0, 00:15:32.251 "data_size": 65536 00:15:32.251 }, 00:15:32.251 { 00:15:32.251 "name": "BaseBdev4", 00:15:32.251 "uuid": "b9b36cf0-6156-4d46-a88b-333dd7537df6", 00:15:32.251 "is_configured": true, 00:15:32.251 "data_offset": 0, 00:15:32.251 "data_size": 65536 00:15:32.251 } 00:15:32.251 ] 00:15:32.251 } 
00:15:32.251 } 00:15:32.251 }' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:32.251 BaseBdev2 00:15:32.251 BaseBdev3 00:15:32.251 BaseBdev4' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.251 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.512 [2024-11-21 03:24:19.864379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.512 [2024-11-21 03:24:19.864444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.512 [2024-11-21 03:24:19.864524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.512 [2024-11-21 03:24:19.864797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.512 [2024-11-21 03:24:19.864857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 95296 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 95296 ']' 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 95296 00:15:32.512 03:24:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95296 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95296' 00:15:32.512 killing process with pid 95296 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 95296 00:15:32.512 [2024-11-21 03:24:19.915728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.512 03:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 95296 00:15:32.512 [2024-11-21 03:24:19.956194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:32.773 00:15:32.773 real 0m9.818s 00:15:32.773 user 0m16.741s 00:15:32.773 sys 0m2.165s 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.773 ************************************ 00:15:32.773 END TEST raid5f_state_function_test 00:15:32.773 ************************************ 00:15:32.773 03:24:20 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:32.773 03:24:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.773 03:24:20 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.773 03:24:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.773 ************************************ 00:15:32.773 START TEST raid5f_state_function_test_sb 00:15:32.773 ************************************ 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.773 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.774 03:24:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95945 00:15:32.774 Process raid pid: 95945 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95945' 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95945 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95945 ']' 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.774 03:24:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.034 [2024-11-21 03:24:20.372241] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:33.034 [2024-11-21 03:24:20.372365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.034 [2024-11-21 03:24:20.508051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:33.034 [2024-11-21 03:24:20.547162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.034 [2024-11-21 03:24:20.574296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.294 [2024-11-21 03:24:20.618794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.294 [2024-11-21 03:24:20.618832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.864 [2024-11-21 03:24:21.201933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.864 [2024-11-21 03:24:21.201984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.864 [2024-11-21 03:24:21.201995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.864 [2024-11-21 03:24:21.202002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.864 [2024-11-21 03:24:21.202011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.864 [2024-11-21 03:24:21.202030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.864 [2024-11-21 03:24:21.202037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:15:33.864 [2024-11-21 03:24:21.202043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.864 03:24:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.864 "name": "Existed_Raid", 00:15:33.864 "uuid": "d4a202a9-a98b-4ad2-a5d1-df5849ade298", 00:15:33.864 "strip_size_kb": 64, 00:15:33.864 "state": "configuring", 00:15:33.864 "raid_level": "raid5f", 00:15:33.864 "superblock": true, 00:15:33.864 "num_base_bdevs": 4, 00:15:33.864 "num_base_bdevs_discovered": 0, 00:15:33.864 "num_base_bdevs_operational": 4, 00:15:33.864 "base_bdevs_list": [ 00:15:33.864 { 00:15:33.864 "name": "BaseBdev1", 00:15:33.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.864 "is_configured": false, 00:15:33.864 "data_offset": 0, 00:15:33.864 "data_size": 0 00:15:33.864 }, 00:15:33.864 { 00:15:33.864 "name": "BaseBdev2", 00:15:33.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.864 "is_configured": false, 00:15:33.864 "data_offset": 0, 00:15:33.864 "data_size": 0 00:15:33.864 }, 00:15:33.864 { 00:15:33.864 "name": "BaseBdev3", 00:15:33.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.864 "is_configured": false, 00:15:33.864 "data_offset": 0, 00:15:33.864 "data_size": 0 00:15:33.864 }, 00:15:33.864 { 00:15:33.864 "name": "BaseBdev4", 00:15:33.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.864 "is_configured": false, 00:15:33.864 "data_offset": 0, 00:15:33.864 "data_size": 0 00:15:33.864 } 00:15:33.864 ] 00:15:33.864 }' 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.864 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.125 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.125 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.125 03:24:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.125 [2024-11-21 03:24:21.613934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.125 [2024-11-21 03:24:21.614011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:34.125 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.125 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.125 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.125 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.125 [2024-11-21 03:24:21.625979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.125 [2024-11-21 03:24:21.626064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.125 [2024-11-21 03:24:21.626093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.125 [2024-11-21 03:24:21.626113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.125 [2024-11-21 03:24:21.626132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.126 [2024-11-21 03:24:21.626150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.126 [2024-11-21 03:24:21.626169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.126 [2024-11-21 03:24:21.626203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.126 03:24:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.126 [2024-11-21 03:24:21.646779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.126 BaseBdev1 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.126 [ 00:15:34.126 { 00:15:34.126 "name": "BaseBdev1", 00:15:34.126 "aliases": [ 00:15:34.126 "c9b43034-2fec-4f0b-9bbb-1a155562ea0a" 00:15:34.126 ], 00:15:34.126 "product_name": "Malloc disk", 00:15:34.126 "block_size": 512, 00:15:34.126 "num_blocks": 65536, 00:15:34.126 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:34.126 "assigned_rate_limits": { 00:15:34.126 "rw_ios_per_sec": 0, 00:15:34.126 "rw_mbytes_per_sec": 0, 00:15:34.126 "r_mbytes_per_sec": 0, 00:15:34.126 "w_mbytes_per_sec": 0 00:15:34.126 }, 00:15:34.126 "claimed": true, 00:15:34.126 "claim_type": "exclusive_write", 00:15:34.126 "zoned": false, 00:15:34.126 "supported_io_types": { 00:15:34.126 "read": true, 00:15:34.126 "write": true, 00:15:34.126 "unmap": true, 00:15:34.126 "flush": true, 00:15:34.126 "reset": true, 00:15:34.126 "nvme_admin": false, 00:15:34.126 "nvme_io": false, 00:15:34.126 "nvme_io_md": false, 00:15:34.126 "write_zeroes": true, 00:15:34.126 "zcopy": true, 00:15:34.126 "get_zone_info": false, 00:15:34.126 "zone_management": false, 00:15:34.126 "zone_append": false, 00:15:34.126 "compare": false, 00:15:34.126 "compare_and_write": false, 00:15:34.126 "abort": true, 00:15:34.126 "seek_hole": false, 00:15:34.126 "seek_data": false, 00:15:34.126 "copy": true, 00:15:34.126 "nvme_iov_md": false 00:15:34.126 }, 00:15:34.126 "memory_domains": [ 00:15:34.126 { 00:15:34.126 "dma_device_id": "system", 00:15:34.126 "dma_device_type": 1 00:15:34.126 }, 00:15:34.126 { 00:15:34.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.126 "dma_device_type": 2 00:15:34.126 } 00:15:34.126 ], 00:15:34.126 "driver_specific": {} 00:15:34.126 } 00:15:34.126 ] 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.126 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.387 "name": "Existed_Raid", 00:15:34.387 "uuid": "a338df92-55d9-4e88-8248-8e57dafe50bd", 00:15:34.387 "strip_size_kb": 64, 00:15:34.387 "state": "configuring", 00:15:34.387 "raid_level": "raid5f", 00:15:34.387 "superblock": true, 00:15:34.387 "num_base_bdevs": 4, 00:15:34.387 "num_base_bdevs_discovered": 1, 00:15:34.387 "num_base_bdevs_operational": 4, 00:15:34.387 "base_bdevs_list": [ 00:15:34.387 { 00:15:34.387 "name": "BaseBdev1", 00:15:34.387 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:34.387 "is_configured": true, 00:15:34.387 "data_offset": 2048, 00:15:34.387 "data_size": 63488 00:15:34.387 }, 00:15:34.387 { 00:15:34.387 "name": "BaseBdev2", 00:15:34.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.387 "is_configured": false, 00:15:34.387 "data_offset": 0, 00:15:34.387 "data_size": 0 00:15:34.387 }, 00:15:34.387 { 00:15:34.387 "name": "BaseBdev3", 00:15:34.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.387 "is_configured": false, 00:15:34.387 "data_offset": 0, 00:15:34.387 "data_size": 0 00:15:34.387 }, 00:15:34.387 { 00:15:34.387 "name": "BaseBdev4", 00:15:34.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.387 "is_configured": false, 00:15:34.387 "data_offset": 0, 00:15:34.387 "data_size": 0 00:15:34.387 } 00:15:34.387 ] 00:15:34.387 }' 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.387 03:24:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 [2024-11-21 03:24:22.146930] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.647 [2024-11-21 03:24:22.147044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 [2024-11-21 03:24:22.158983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.647 [2024-11-21 03:24:22.160791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.647 [2024-11-21 03:24:22.160830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.647 [2024-11-21 03:24:22.160840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.647 [2024-11-21 03:24:22.160847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.647 [2024-11-21 03:24:22.160854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.647 [2024-11-21 03:24:22.160860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.647 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.907 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.907 "name": "Existed_Raid", 00:15:34.907 "uuid": 
"f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:34.907 "strip_size_kb": 64, 00:15:34.907 "state": "configuring", 00:15:34.907 "raid_level": "raid5f", 00:15:34.907 "superblock": true, 00:15:34.907 "num_base_bdevs": 4, 00:15:34.907 "num_base_bdevs_discovered": 1, 00:15:34.907 "num_base_bdevs_operational": 4, 00:15:34.907 "base_bdevs_list": [ 00:15:34.907 { 00:15:34.907 "name": "BaseBdev1", 00:15:34.907 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:34.907 "is_configured": true, 00:15:34.907 "data_offset": 2048, 00:15:34.907 "data_size": 63488 00:15:34.907 }, 00:15:34.907 { 00:15:34.907 "name": "BaseBdev2", 00:15:34.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.907 "is_configured": false, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 0 00:15:34.907 }, 00:15:34.907 { 00:15:34.907 "name": "BaseBdev3", 00:15:34.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.907 "is_configured": false, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 0 00:15:34.907 }, 00:15:34.907 { 00:15:34.907 "name": "BaseBdev4", 00:15:34.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.907 "is_configured": false, 00:15:34.907 "data_offset": 0, 00:15:34.907 "data_size": 0 00:15:34.907 } 00:15:34.907 ] 00:15:34.907 }' 00:15:34.907 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.907 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 [2024-11-21 03:24:22.561866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.168 BaseBdev2 00:15:35.168 
03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 [ 00:15:35.168 { 00:15:35.168 "name": "BaseBdev2", 00:15:35.168 "aliases": [ 00:15:35.168 "8e2a365c-3e7d-4604-b739-0291c85e4f04" 00:15:35.168 ], 00:15:35.168 "product_name": "Malloc disk", 00:15:35.168 "block_size": 512, 00:15:35.168 "num_blocks": 65536, 00:15:35.168 "uuid": "8e2a365c-3e7d-4604-b739-0291c85e4f04", 00:15:35.168 "assigned_rate_limits": { 
00:15:35.168 "rw_ios_per_sec": 0, 00:15:35.168 "rw_mbytes_per_sec": 0, 00:15:35.168 "r_mbytes_per_sec": 0, 00:15:35.168 "w_mbytes_per_sec": 0 00:15:35.168 }, 00:15:35.168 "claimed": true, 00:15:35.168 "claim_type": "exclusive_write", 00:15:35.168 "zoned": false, 00:15:35.168 "supported_io_types": { 00:15:35.168 "read": true, 00:15:35.168 "write": true, 00:15:35.168 "unmap": true, 00:15:35.168 "flush": true, 00:15:35.168 "reset": true, 00:15:35.168 "nvme_admin": false, 00:15:35.168 "nvme_io": false, 00:15:35.168 "nvme_io_md": false, 00:15:35.168 "write_zeroes": true, 00:15:35.168 "zcopy": true, 00:15:35.168 "get_zone_info": false, 00:15:35.168 "zone_management": false, 00:15:35.168 "zone_append": false, 00:15:35.168 "compare": false, 00:15:35.168 "compare_and_write": false, 00:15:35.168 "abort": true, 00:15:35.168 "seek_hole": false, 00:15:35.168 "seek_data": false, 00:15:35.168 "copy": true, 00:15:35.168 "nvme_iov_md": false 00:15:35.168 }, 00:15:35.168 "memory_domains": [ 00:15:35.168 { 00:15:35.168 "dma_device_id": "system", 00:15:35.168 "dma_device_type": 1 00:15:35.168 }, 00:15:35.168 { 00:15:35.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.168 "dma_device_type": 2 00:15:35.168 } 00:15:35.168 ], 00:15:35.168 "driver_specific": {} 00:15:35.168 } 00:15:35.168 ] 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.168 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.168 "name": "Existed_Raid", 00:15:35.168 "uuid": "f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:35.168 "strip_size_kb": 64, 00:15:35.168 "state": "configuring", 00:15:35.168 "raid_level": "raid5f", 00:15:35.168 "superblock": true, 00:15:35.168 "num_base_bdevs": 4, 00:15:35.168 "num_base_bdevs_discovered": 2, 00:15:35.168 
"num_base_bdevs_operational": 4, 00:15:35.168 "base_bdevs_list": [ 00:15:35.168 { 00:15:35.168 "name": "BaseBdev1", 00:15:35.168 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:35.168 "is_configured": true, 00:15:35.168 "data_offset": 2048, 00:15:35.168 "data_size": 63488 00:15:35.168 }, 00:15:35.168 { 00:15:35.168 "name": "BaseBdev2", 00:15:35.168 "uuid": "8e2a365c-3e7d-4604-b739-0291c85e4f04", 00:15:35.168 "is_configured": true, 00:15:35.168 "data_offset": 2048, 00:15:35.168 "data_size": 63488 00:15:35.168 }, 00:15:35.168 { 00:15:35.168 "name": "BaseBdev3", 00:15:35.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.168 "is_configured": false, 00:15:35.169 "data_offset": 0, 00:15:35.169 "data_size": 0 00:15:35.169 }, 00:15:35.169 { 00:15:35.169 "name": "BaseBdev4", 00:15:35.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.169 "is_configured": false, 00:15:35.169 "data_offset": 0, 00:15:35.169 "data_size": 0 00:15:35.169 } 00:15:35.169 ] 00:15:35.169 }' 00:15:35.169 03:24:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.169 03:24:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.739 [2024-11-21 03:24:23.072471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.739 BaseBdev3 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.739 03:24:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.739 [ 00:15:35.739 { 00:15:35.739 "name": "BaseBdev3", 00:15:35.739 "aliases": [ 00:15:35.739 "71c7365a-b685-46a8-8ab5-a68ec1184fe1" 00:15:35.739 ], 00:15:35.739 "product_name": "Malloc disk", 00:15:35.739 "block_size": 512, 00:15:35.739 "num_blocks": 65536, 00:15:35.739 "uuid": "71c7365a-b685-46a8-8ab5-a68ec1184fe1", 00:15:35.739 "assigned_rate_limits": { 00:15:35.739 "rw_ios_per_sec": 0, 00:15:35.739 "rw_mbytes_per_sec": 0, 00:15:35.739 "r_mbytes_per_sec": 0, 00:15:35.739 "w_mbytes_per_sec": 0 00:15:35.739 }, 00:15:35.739 "claimed": true, 00:15:35.739 "claim_type": "exclusive_write", 
00:15:35.739 "zoned": false, 00:15:35.739 "supported_io_types": { 00:15:35.739 "read": true, 00:15:35.739 "write": true, 00:15:35.739 "unmap": true, 00:15:35.739 "flush": true, 00:15:35.739 "reset": true, 00:15:35.739 "nvme_admin": false, 00:15:35.739 "nvme_io": false, 00:15:35.739 "nvme_io_md": false, 00:15:35.739 "write_zeroes": true, 00:15:35.739 "zcopy": true, 00:15:35.739 "get_zone_info": false, 00:15:35.739 "zone_management": false, 00:15:35.739 "zone_append": false, 00:15:35.739 "compare": false, 00:15:35.739 "compare_and_write": false, 00:15:35.739 "abort": true, 00:15:35.739 "seek_hole": false, 00:15:35.739 "seek_data": false, 00:15:35.739 "copy": true, 00:15:35.739 "nvme_iov_md": false 00:15:35.739 }, 00:15:35.739 "memory_domains": [ 00:15:35.739 { 00:15:35.739 "dma_device_id": "system", 00:15:35.739 "dma_device_type": 1 00:15:35.739 }, 00:15:35.739 { 00:15:35.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.739 "dma_device_type": 2 00:15:35.739 } 00:15:35.739 ], 00:15:35.739 "driver_specific": {} 00:15:35.739 } 00:15:35.739 ] 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.739 "name": "Existed_Raid", 00:15:35.739 "uuid": "f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:35.739 "strip_size_kb": 64, 00:15:35.739 "state": "configuring", 00:15:35.739 "raid_level": "raid5f", 00:15:35.739 "superblock": true, 00:15:35.739 "num_base_bdevs": 4, 00:15:35.739 "num_base_bdevs_discovered": 3, 00:15:35.739 "num_base_bdevs_operational": 4, 00:15:35.739 "base_bdevs_list": [ 00:15:35.739 { 00:15:35.739 "name": "BaseBdev1", 00:15:35.739 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:35.739 "is_configured": true, 00:15:35.739 "data_offset": 2048, 
00:15:35.739 "data_size": 63488 00:15:35.739 }, 00:15:35.739 { 00:15:35.739 "name": "BaseBdev2", 00:15:35.739 "uuid": "8e2a365c-3e7d-4604-b739-0291c85e4f04", 00:15:35.739 "is_configured": true, 00:15:35.739 "data_offset": 2048, 00:15:35.739 "data_size": 63488 00:15:35.739 }, 00:15:35.739 { 00:15:35.739 "name": "BaseBdev3", 00:15:35.739 "uuid": "71c7365a-b685-46a8-8ab5-a68ec1184fe1", 00:15:35.739 "is_configured": true, 00:15:35.739 "data_offset": 2048, 00:15:35.739 "data_size": 63488 00:15:35.739 }, 00:15:35.739 { 00:15:35.739 "name": "BaseBdev4", 00:15:35.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.739 "is_configured": false, 00:15:35.739 "data_offset": 0, 00:15:35.739 "data_size": 0 00:15:35.739 } 00:15:35.739 ] 00:15:35.739 }' 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.739 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.999 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:35.999 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.999 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.259 [2024-11-21 03:24:23.563722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.259 [2024-11-21 03:24:23.563923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:36.259 [2024-11-21 03:24:23.563943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:36.259 [2024-11-21 03:24:23.564249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:36.259 BaseBdev4 00:15:36.259 [2024-11-21 03:24:23.564738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:36.259 
[2024-11-21 03:24:23.564778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:15:36.259 [2024-11-21 03:24:23.564905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.259 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.259 [ 00:15:36.259 { 00:15:36.259 "name": "BaseBdev4", 00:15:36.259 "aliases": [ 
00:15:36.259 "6e5a2544-4a8e-48bc-ad7b-8fdb446c9c51" 00:15:36.259 ], 00:15:36.259 "product_name": "Malloc disk", 00:15:36.259 "block_size": 512, 00:15:36.259 "num_blocks": 65536, 00:15:36.259 "uuid": "6e5a2544-4a8e-48bc-ad7b-8fdb446c9c51", 00:15:36.259 "assigned_rate_limits": { 00:15:36.259 "rw_ios_per_sec": 0, 00:15:36.259 "rw_mbytes_per_sec": 0, 00:15:36.259 "r_mbytes_per_sec": 0, 00:15:36.259 "w_mbytes_per_sec": 0 00:15:36.259 }, 00:15:36.259 "claimed": true, 00:15:36.259 "claim_type": "exclusive_write", 00:15:36.259 "zoned": false, 00:15:36.259 "supported_io_types": { 00:15:36.259 "read": true, 00:15:36.259 "write": true, 00:15:36.259 "unmap": true, 00:15:36.259 "flush": true, 00:15:36.259 "reset": true, 00:15:36.259 "nvme_admin": false, 00:15:36.259 "nvme_io": false, 00:15:36.259 "nvme_io_md": false, 00:15:36.259 "write_zeroes": true, 00:15:36.259 "zcopy": true, 00:15:36.259 "get_zone_info": false, 00:15:36.259 "zone_management": false, 00:15:36.259 "zone_append": false, 00:15:36.259 "compare": false, 00:15:36.259 "compare_and_write": false, 00:15:36.259 "abort": true, 00:15:36.259 "seek_hole": false, 00:15:36.259 "seek_data": false, 00:15:36.259 "copy": true, 00:15:36.259 "nvme_iov_md": false 00:15:36.259 }, 00:15:36.259 "memory_domains": [ 00:15:36.259 { 00:15:36.259 "dma_device_id": "system", 00:15:36.260 "dma_device_type": 1 00:15:36.260 }, 00:15:36.260 { 00:15:36.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.260 "dma_device_type": 2 00:15:36.260 } 00:15:36.260 ], 00:15:36.260 "driver_specific": {} 00:15:36.260 } 00:15:36.260 ] 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.260 "name": "Existed_Raid", 00:15:36.260 "uuid": 
"f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:36.260 "strip_size_kb": 64, 00:15:36.260 "state": "online", 00:15:36.260 "raid_level": "raid5f", 00:15:36.260 "superblock": true, 00:15:36.260 "num_base_bdevs": 4, 00:15:36.260 "num_base_bdevs_discovered": 4, 00:15:36.260 "num_base_bdevs_operational": 4, 00:15:36.260 "base_bdevs_list": [ 00:15:36.260 { 00:15:36.260 "name": "BaseBdev1", 00:15:36.260 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:36.260 "is_configured": true, 00:15:36.260 "data_offset": 2048, 00:15:36.260 "data_size": 63488 00:15:36.260 }, 00:15:36.260 { 00:15:36.260 "name": "BaseBdev2", 00:15:36.260 "uuid": "8e2a365c-3e7d-4604-b739-0291c85e4f04", 00:15:36.260 "is_configured": true, 00:15:36.260 "data_offset": 2048, 00:15:36.260 "data_size": 63488 00:15:36.260 }, 00:15:36.260 { 00:15:36.260 "name": "BaseBdev3", 00:15:36.260 "uuid": "71c7365a-b685-46a8-8ab5-a68ec1184fe1", 00:15:36.260 "is_configured": true, 00:15:36.260 "data_offset": 2048, 00:15:36.260 "data_size": 63488 00:15:36.260 }, 00:15:36.260 { 00:15:36.260 "name": "BaseBdev4", 00:15:36.260 "uuid": "6e5a2544-4a8e-48bc-ad7b-8fdb446c9c51", 00:15:36.260 "is_configured": true, 00:15:36.260 "data_offset": 2048, 00:15:36.260 "data_size": 63488 00:15:36.260 } 00:15:36.260 ] 00:15:36.260 }' 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.260 03:24:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.520 03:24:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.520 [2024-11-21 03:24:24.024070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.520 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.520 "name": "Existed_Raid", 00:15:36.520 "aliases": [ 00:15:36.520 "f7cc662a-be3f-4657-8977-4c785fcb471a" 00:15:36.520 ], 00:15:36.520 "product_name": "Raid Volume", 00:15:36.520 "block_size": 512, 00:15:36.520 "num_blocks": 190464, 00:15:36.520 "uuid": "f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:36.520 "assigned_rate_limits": { 00:15:36.520 "rw_ios_per_sec": 0, 00:15:36.520 "rw_mbytes_per_sec": 0, 00:15:36.520 "r_mbytes_per_sec": 0, 00:15:36.520 "w_mbytes_per_sec": 0 00:15:36.520 }, 00:15:36.520 "claimed": false, 00:15:36.520 "zoned": false, 00:15:36.521 "supported_io_types": { 00:15:36.521 "read": true, 00:15:36.521 "write": true, 00:15:36.521 "unmap": false, 00:15:36.521 "flush": false, 00:15:36.521 "reset": true, 00:15:36.521 "nvme_admin": false, 00:15:36.521 "nvme_io": false, 00:15:36.521 "nvme_io_md": false, 00:15:36.521 "write_zeroes": true, 00:15:36.521 "zcopy": false, 00:15:36.521 "get_zone_info": false, 00:15:36.521 "zone_management": false, 00:15:36.521 
"zone_append": false, 00:15:36.521 "compare": false, 00:15:36.521 "compare_and_write": false, 00:15:36.521 "abort": false, 00:15:36.521 "seek_hole": false, 00:15:36.521 "seek_data": false, 00:15:36.521 "copy": false, 00:15:36.521 "nvme_iov_md": false 00:15:36.521 }, 00:15:36.521 "driver_specific": { 00:15:36.521 "raid": { 00:15:36.521 "uuid": "f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:36.521 "strip_size_kb": 64, 00:15:36.521 "state": "online", 00:15:36.521 "raid_level": "raid5f", 00:15:36.521 "superblock": true, 00:15:36.521 "num_base_bdevs": 4, 00:15:36.521 "num_base_bdevs_discovered": 4, 00:15:36.521 "num_base_bdevs_operational": 4, 00:15:36.521 "base_bdevs_list": [ 00:15:36.521 { 00:15:36.521 "name": "BaseBdev1", 00:15:36.521 "uuid": "c9b43034-2fec-4f0b-9bbb-1a155562ea0a", 00:15:36.521 "is_configured": true, 00:15:36.521 "data_offset": 2048, 00:15:36.521 "data_size": 63488 00:15:36.521 }, 00:15:36.521 { 00:15:36.521 "name": "BaseBdev2", 00:15:36.521 "uuid": "8e2a365c-3e7d-4604-b739-0291c85e4f04", 00:15:36.521 "is_configured": true, 00:15:36.521 "data_offset": 2048, 00:15:36.521 "data_size": 63488 00:15:36.521 }, 00:15:36.521 { 00:15:36.521 "name": "BaseBdev3", 00:15:36.521 "uuid": "71c7365a-b685-46a8-8ab5-a68ec1184fe1", 00:15:36.521 "is_configured": true, 00:15:36.521 "data_offset": 2048, 00:15:36.521 "data_size": 63488 00:15:36.521 }, 00:15:36.521 { 00:15:36.521 "name": "BaseBdev4", 00:15:36.521 "uuid": "6e5a2544-4a8e-48bc-ad7b-8fdb446c9c51", 00:15:36.521 "is_configured": true, 00:15:36.521 "data_offset": 2048, 00:15:36.521 "data_size": 63488 00:15:36.521 } 00:15:36.521 ] 00:15:36.521 } 00:15:36.521 } 00:15:36.521 }' 00:15:36.521 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.781 BaseBdev2 00:15:36.781 BaseBdev3 
00:15:36.781 BaseBdev4' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.781 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.782 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:36.782 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.782 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.782 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.782 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.782 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.782 [2024-11-21 03:24:24.340028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.042 
03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.042 "name": "Existed_Raid", 00:15:37.042 "uuid": "f7cc662a-be3f-4657-8977-4c785fcb471a", 00:15:37.042 "strip_size_kb": 64, 00:15:37.042 "state": "online", 00:15:37.042 "raid_level": "raid5f", 00:15:37.042 "superblock": true, 00:15:37.042 "num_base_bdevs": 4, 00:15:37.042 "num_base_bdevs_discovered": 3, 00:15:37.042 "num_base_bdevs_operational": 3, 00:15:37.042 "base_bdevs_list": [ 00:15:37.042 { 00:15:37.042 "name": null, 00:15:37.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.042 "is_configured": false, 00:15:37.042 "data_offset": 0, 00:15:37.042 "data_size": 63488 00:15:37.042 }, 00:15:37.042 { 00:15:37.042 "name": "BaseBdev2", 00:15:37.042 "uuid": "8e2a365c-3e7d-4604-b739-0291c85e4f04", 
00:15:37.042 "is_configured": true, 00:15:37.042 "data_offset": 2048, 00:15:37.042 "data_size": 63488 00:15:37.042 }, 00:15:37.042 { 00:15:37.042 "name": "BaseBdev3", 00:15:37.042 "uuid": "71c7365a-b685-46a8-8ab5-a68ec1184fe1", 00:15:37.042 "is_configured": true, 00:15:37.042 "data_offset": 2048, 00:15:37.042 "data_size": 63488 00:15:37.042 }, 00:15:37.042 { 00:15:37.042 "name": "BaseBdev4", 00:15:37.042 "uuid": "6e5a2544-4a8e-48bc-ad7b-8fdb446c9c51", 00:15:37.042 "is_configured": true, 00:15:37.042 "data_offset": 2048, 00:15:37.042 "data_size": 63488 00:15:37.042 } 00:15:37.042 ] 00:15:37.042 }' 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.042 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.302 
03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.302 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.302 [2024-11-21 03:24:24.855730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.302 [2024-11-21 03:24:24.855927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.563 [2024-11-21 03:24:24.867224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 [2024-11-21 03:24:24.915249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 [2024-11-21 03:24:24.986631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:37.563 [2024-11-21 03:24:24.986675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:37.563 03:24:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.563 03:24:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 BaseBdev2 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.563 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 [ 00:15:37.563 { 00:15:37.563 "name": "BaseBdev2", 00:15:37.563 "aliases": [ 00:15:37.563 "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89" 00:15:37.563 ], 00:15:37.563 "product_name": "Malloc disk", 00:15:37.563 "block_size": 512, 00:15:37.563 "num_blocks": 65536, 00:15:37.563 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:37.563 "assigned_rate_limits": { 00:15:37.563 "rw_ios_per_sec": 0, 00:15:37.563 "rw_mbytes_per_sec": 0, 00:15:37.563 "r_mbytes_per_sec": 0, 00:15:37.563 "w_mbytes_per_sec": 0 00:15:37.563 }, 
00:15:37.563 "claimed": false, 00:15:37.563 "zoned": false, 00:15:37.563 "supported_io_types": { 00:15:37.563 "read": true, 00:15:37.563 "write": true, 00:15:37.563 "unmap": true, 00:15:37.563 "flush": true, 00:15:37.563 "reset": true, 00:15:37.563 "nvme_admin": false, 00:15:37.563 "nvme_io": false, 00:15:37.563 "nvme_io_md": false, 00:15:37.563 "write_zeroes": true, 00:15:37.563 "zcopy": true, 00:15:37.563 "get_zone_info": false, 00:15:37.563 "zone_management": false, 00:15:37.563 "zone_append": false, 00:15:37.563 "compare": false, 00:15:37.563 "compare_and_write": false, 00:15:37.563 "abort": true, 00:15:37.563 "seek_hole": false, 00:15:37.563 "seek_data": false, 00:15:37.563 "copy": true, 00:15:37.563 "nvme_iov_md": false 00:15:37.563 }, 00:15:37.563 "memory_domains": [ 00:15:37.563 { 00:15:37.563 "dma_device_id": "system", 00:15:37.563 "dma_device_type": 1 00:15:37.563 }, 00:15:37.563 { 00:15:37.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.564 "dma_device_type": 2 00:15:37.564 } 00:15:37.564 ], 00:15:37.564 "driver_specific": {} 00:15:37.564 } 00:15:37.564 ] 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.564 BaseBdev3 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.564 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.824 [ 00:15:37.824 { 00:15:37.824 "name": "BaseBdev3", 00:15:37.824 "aliases": [ 00:15:37.824 "85bdd52f-c568-4645-b515-ca0911ff224a" 00:15:37.824 ], 00:15:37.824 "product_name": "Malloc disk", 00:15:37.824 "block_size": 512, 00:15:37.824 "num_blocks": 65536, 00:15:37.824 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:37.824 "assigned_rate_limits": { 00:15:37.824 "rw_ios_per_sec": 0, 00:15:37.824 
"rw_mbytes_per_sec": 0, 00:15:37.824 "r_mbytes_per_sec": 0, 00:15:37.824 "w_mbytes_per_sec": 0 00:15:37.824 }, 00:15:37.824 "claimed": false, 00:15:37.824 "zoned": false, 00:15:37.824 "supported_io_types": { 00:15:37.824 "read": true, 00:15:37.824 "write": true, 00:15:37.824 "unmap": true, 00:15:37.824 "flush": true, 00:15:37.824 "reset": true, 00:15:37.824 "nvme_admin": false, 00:15:37.824 "nvme_io": false, 00:15:37.824 "nvme_io_md": false, 00:15:37.824 "write_zeroes": true, 00:15:37.824 "zcopy": true, 00:15:37.824 "get_zone_info": false, 00:15:37.824 "zone_management": false, 00:15:37.824 "zone_append": false, 00:15:37.824 "compare": false, 00:15:37.824 "compare_and_write": false, 00:15:37.824 "abort": true, 00:15:37.824 "seek_hole": false, 00:15:37.824 "seek_data": false, 00:15:37.824 "copy": true, 00:15:37.824 "nvme_iov_md": false 00:15:37.824 }, 00:15:37.824 "memory_domains": [ 00:15:37.824 { 00:15:37.824 "dma_device_id": "system", 00:15:37.824 "dma_device_type": 1 00:15:37.824 }, 00:15:37.824 { 00:15:37.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.824 "dma_device_type": 2 00:15:37.824 } 00:15:37.824 ], 00:15:37.824 "driver_specific": {} 00:15:37.824 } 00:15:37.824 ] 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:37.824 BaseBdev4 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.824 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.825 [ 00:15:37.825 { 00:15:37.825 "name": "BaseBdev4", 00:15:37.825 "aliases": [ 00:15:37.825 "63307ce3-9905-44ab-aa9c-92598dc12de1" 00:15:37.825 ], 00:15:37.825 "product_name": "Malloc disk", 00:15:37.825 "block_size": 512, 00:15:37.825 "num_blocks": 65536, 00:15:37.825 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 
00:15:37.825 "assigned_rate_limits": { 00:15:37.825 "rw_ios_per_sec": 0, 00:15:37.825 "rw_mbytes_per_sec": 0, 00:15:37.825 "r_mbytes_per_sec": 0, 00:15:37.825 "w_mbytes_per_sec": 0 00:15:37.825 }, 00:15:37.825 "claimed": false, 00:15:37.825 "zoned": false, 00:15:37.825 "supported_io_types": { 00:15:37.825 "read": true, 00:15:37.825 "write": true, 00:15:37.825 "unmap": true, 00:15:37.825 "flush": true, 00:15:37.825 "reset": true, 00:15:37.825 "nvme_admin": false, 00:15:37.825 "nvme_io": false, 00:15:37.825 "nvme_io_md": false, 00:15:37.825 "write_zeroes": true, 00:15:37.825 "zcopy": true, 00:15:37.825 "get_zone_info": false, 00:15:37.825 "zone_management": false, 00:15:37.825 "zone_append": false, 00:15:37.825 "compare": false, 00:15:37.825 "compare_and_write": false, 00:15:37.825 "abort": true, 00:15:37.825 "seek_hole": false, 00:15:37.825 "seek_data": false, 00:15:37.825 "copy": true, 00:15:37.825 "nvme_iov_md": false 00:15:37.825 }, 00:15:37.825 "memory_domains": [ 00:15:37.825 { 00:15:37.825 "dma_device_id": "system", 00:15:37.825 "dma_device_type": 1 00:15:37.825 }, 00:15:37.825 { 00:15:37.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.825 "dma_device_type": 2 00:15:37.825 } 00:15:37.825 ], 00:15:37.825 "driver_specific": {} 00:15:37.825 } 00:15:37.825 ] 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.825 [2024-11-21 03:24:25.206269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.825 [2024-11-21 03:24:25.206393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.825 [2024-11-21 03:24:25.206431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.825 [2024-11-21 03:24:25.208205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.825 [2024-11-21 03:24:25.208309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.825 "name": "Existed_Raid", 00:15:37.825 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:37.825 "strip_size_kb": 64, 00:15:37.825 "state": "configuring", 00:15:37.825 "raid_level": "raid5f", 00:15:37.825 "superblock": true, 00:15:37.825 "num_base_bdevs": 4, 00:15:37.825 "num_base_bdevs_discovered": 3, 00:15:37.825 "num_base_bdevs_operational": 4, 00:15:37.825 "base_bdevs_list": [ 00:15:37.825 { 00:15:37.825 "name": "BaseBdev1", 00:15:37.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.825 "is_configured": false, 00:15:37.825 "data_offset": 0, 00:15:37.825 "data_size": 0 00:15:37.825 }, 00:15:37.825 { 00:15:37.825 "name": "BaseBdev2", 00:15:37.825 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:37.825 "is_configured": true, 00:15:37.825 "data_offset": 2048, 00:15:37.825 "data_size": 63488 00:15:37.825 }, 00:15:37.825 { 00:15:37.825 "name": "BaseBdev3", 00:15:37.825 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:37.825 "is_configured": true, 00:15:37.825 "data_offset": 2048, 00:15:37.825 "data_size": 63488 00:15:37.825 }, 00:15:37.825 { 00:15:37.825 "name": "BaseBdev4", 00:15:37.825 "uuid": 
"63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:37.825 "is_configured": true, 00:15:37.825 "data_offset": 2048, 00:15:37.825 "data_size": 63488 00:15:37.825 } 00:15:37.825 ] 00:15:37.825 }' 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.825 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 [2024-11-21 03:24:25.682340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.395 03:24:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.395 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.395 "name": "Existed_Raid", 00:15:38.395 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:38.395 "strip_size_kb": 64, 00:15:38.395 "state": "configuring", 00:15:38.395 "raid_level": "raid5f", 00:15:38.395 "superblock": true, 00:15:38.395 "num_base_bdevs": 4, 00:15:38.395 "num_base_bdevs_discovered": 2, 00:15:38.395 "num_base_bdevs_operational": 4, 00:15:38.395 "base_bdevs_list": [ 00:15:38.395 { 00:15:38.395 "name": "BaseBdev1", 00:15:38.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.395 "is_configured": false, 00:15:38.395 "data_offset": 0, 00:15:38.395 "data_size": 0 00:15:38.395 }, 00:15:38.395 { 00:15:38.395 "name": null, 00:15:38.395 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:38.395 "is_configured": false, 00:15:38.395 "data_offset": 0, 00:15:38.395 "data_size": 63488 00:15:38.395 }, 00:15:38.395 { 00:15:38.395 "name": "BaseBdev3", 00:15:38.395 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:38.396 "is_configured": true, 00:15:38.396 "data_offset": 2048, 00:15:38.396 "data_size": 63488 00:15:38.396 }, 00:15:38.396 { 
00:15:38.396 "name": "BaseBdev4", 00:15:38.396 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:38.396 "is_configured": true, 00:15:38.396 "data_offset": 2048, 00:15:38.396 "data_size": 63488 00:15:38.396 } 00:15:38.396 ] 00:15:38.396 }' 00:15:38.396 03:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.396 03:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.655 [2024-11-21 03:24:26.189562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.655 BaseBdev1 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev1 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.655 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.655 [ 00:15:38.655 { 00:15:38.655 "name": "BaseBdev1", 00:15:38.655 "aliases": [ 00:15:38.655 "3e1aa298-7472-47cf-b003-07cdcf401f63" 00:15:38.655 ], 00:15:38.655 "product_name": "Malloc disk", 00:15:38.655 "block_size": 512, 00:15:38.655 "num_blocks": 65536, 00:15:38.655 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:38.655 "assigned_rate_limits": { 00:15:38.655 "rw_ios_per_sec": 0, 00:15:38.655 "rw_mbytes_per_sec": 0, 00:15:38.914 "r_mbytes_per_sec": 0, 00:15:38.914 "w_mbytes_per_sec": 0 00:15:38.914 }, 00:15:38.914 "claimed": true, 00:15:38.914 "claim_type": "exclusive_write", 00:15:38.914 "zoned": false, 00:15:38.914 "supported_io_types": { 00:15:38.914 
"read": true, 00:15:38.914 "write": true, 00:15:38.914 "unmap": true, 00:15:38.914 "flush": true, 00:15:38.914 "reset": true, 00:15:38.914 "nvme_admin": false, 00:15:38.914 "nvme_io": false, 00:15:38.914 "nvme_io_md": false, 00:15:38.914 "write_zeroes": true, 00:15:38.914 "zcopy": true, 00:15:38.914 "get_zone_info": false, 00:15:38.914 "zone_management": false, 00:15:38.914 "zone_append": false, 00:15:38.914 "compare": false, 00:15:38.914 "compare_and_write": false, 00:15:38.914 "abort": true, 00:15:38.914 "seek_hole": false, 00:15:38.914 "seek_data": false, 00:15:38.914 "copy": true, 00:15:38.914 "nvme_iov_md": false 00:15:38.914 }, 00:15:38.914 "memory_domains": [ 00:15:38.914 { 00:15:38.914 "dma_device_id": "system", 00:15:38.914 "dma_device_type": 1 00:15:38.914 }, 00:15:38.914 { 00:15:38.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.914 "dma_device_type": 2 00:15:38.914 } 00:15:38.914 ], 00:15:38.914 "driver_specific": {} 00:15:38.914 } 00:15:38.914 ] 00:15:38.914 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.914 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.914 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.914 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.914 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.914 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.915 03:24:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.915 "name": "Existed_Raid", 00:15:38.915 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:38.915 "strip_size_kb": 64, 00:15:38.915 "state": "configuring", 00:15:38.915 "raid_level": "raid5f", 00:15:38.915 "superblock": true, 00:15:38.915 "num_base_bdevs": 4, 00:15:38.915 "num_base_bdevs_discovered": 3, 00:15:38.915 "num_base_bdevs_operational": 4, 00:15:38.915 "base_bdevs_list": [ 00:15:38.915 { 00:15:38.915 "name": "BaseBdev1", 00:15:38.915 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:38.915 "is_configured": true, 00:15:38.915 "data_offset": 2048, 00:15:38.915 "data_size": 63488 00:15:38.915 }, 00:15:38.915 { 00:15:38.915 "name": null, 00:15:38.915 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:38.915 "is_configured": false, 00:15:38.915 "data_offset": 0, 00:15:38.915 "data_size": 63488 00:15:38.915 }, 00:15:38.915 { 
00:15:38.915 "name": "BaseBdev3", 00:15:38.915 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:38.915 "is_configured": true, 00:15:38.915 "data_offset": 2048, 00:15:38.915 "data_size": 63488 00:15:38.915 }, 00:15:38.915 { 00:15:38.915 "name": "BaseBdev4", 00:15:38.915 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:38.915 "is_configured": true, 00:15:38.915 "data_offset": 2048, 00:15:38.915 "data_size": 63488 00:15:38.915 } 00:15:38.915 ] 00:15:38.915 }' 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.915 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.175 [2024-11-21 03:24:26.705731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.175 03:24:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.175 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.434 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.434 "name": "Existed_Raid", 00:15:39.434 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 
00:15:39.434 "strip_size_kb": 64, 00:15:39.434 "state": "configuring", 00:15:39.434 "raid_level": "raid5f", 00:15:39.434 "superblock": true, 00:15:39.434 "num_base_bdevs": 4, 00:15:39.434 "num_base_bdevs_discovered": 2, 00:15:39.435 "num_base_bdevs_operational": 4, 00:15:39.435 "base_bdevs_list": [ 00:15:39.435 { 00:15:39.435 "name": "BaseBdev1", 00:15:39.435 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:39.435 "is_configured": true, 00:15:39.435 "data_offset": 2048, 00:15:39.435 "data_size": 63488 00:15:39.435 }, 00:15:39.435 { 00:15:39.435 "name": null, 00:15:39.435 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:39.435 "is_configured": false, 00:15:39.435 "data_offset": 0, 00:15:39.435 "data_size": 63488 00:15:39.435 }, 00:15:39.435 { 00:15:39.435 "name": null, 00:15:39.435 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:39.435 "is_configured": false, 00:15:39.435 "data_offset": 0, 00:15:39.435 "data_size": 63488 00:15:39.435 }, 00:15:39.435 { 00:15:39.435 "name": "BaseBdev4", 00:15:39.435 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:39.435 "is_configured": true, 00:15:39.435 "data_offset": 2048, 00:15:39.435 "data_size": 63488 00:15:39.435 } 00:15:39.435 ] 00:15:39.435 }' 00:15:39.435 03:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.435 03:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.694 [2024-11-21 03:24:27.237909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.694 
03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.694 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.954 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.954 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.954 "name": "Existed_Raid", 00:15:39.954 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:39.954 "strip_size_kb": 64, 00:15:39.954 "state": "configuring", 00:15:39.954 "raid_level": "raid5f", 00:15:39.954 "superblock": true, 00:15:39.954 "num_base_bdevs": 4, 00:15:39.954 "num_base_bdevs_discovered": 3, 00:15:39.954 "num_base_bdevs_operational": 4, 00:15:39.954 "base_bdevs_list": [ 00:15:39.954 { 00:15:39.954 "name": "BaseBdev1", 00:15:39.954 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:39.954 "is_configured": true, 00:15:39.954 "data_offset": 2048, 00:15:39.954 "data_size": 63488 00:15:39.954 }, 00:15:39.954 { 00:15:39.954 "name": null, 00:15:39.954 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:39.954 "is_configured": false, 00:15:39.954 "data_offset": 0, 00:15:39.954 "data_size": 63488 00:15:39.954 }, 00:15:39.954 { 00:15:39.954 "name": "BaseBdev3", 00:15:39.954 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:39.954 "is_configured": true, 00:15:39.954 "data_offset": 2048, 00:15:39.954 "data_size": 63488 00:15:39.954 }, 00:15:39.954 { 00:15:39.954 "name": "BaseBdev4", 00:15:39.954 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:39.954 "is_configured": true, 00:15:39.954 "data_offset": 2048, 00:15:39.954 "data_size": 63488 00:15:39.954 } 
00:15:39.954 ] 00:15:39.954 }' 00:15:39.954 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.954 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.214 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.214 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.214 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.214 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.214 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.475 [2024-11-21 03:24:27.786083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.475 "name": "Existed_Raid", 00:15:40.475 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:40.475 "strip_size_kb": 64, 00:15:40.475 "state": "configuring", 00:15:40.475 "raid_level": "raid5f", 00:15:40.475 "superblock": true, 00:15:40.475 "num_base_bdevs": 4, 00:15:40.475 "num_base_bdevs_discovered": 2, 00:15:40.475 "num_base_bdevs_operational": 4, 00:15:40.475 "base_bdevs_list": [ 00:15:40.475 { 00:15:40.475 "name": null, 00:15:40.475 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:40.475 "is_configured": false, 00:15:40.475 
"data_offset": 0, 00:15:40.475 "data_size": 63488 00:15:40.475 }, 00:15:40.475 { 00:15:40.475 "name": null, 00:15:40.475 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:40.475 "is_configured": false, 00:15:40.475 "data_offset": 0, 00:15:40.475 "data_size": 63488 00:15:40.475 }, 00:15:40.475 { 00:15:40.475 "name": "BaseBdev3", 00:15:40.475 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:40.475 "is_configured": true, 00:15:40.475 "data_offset": 2048, 00:15:40.475 "data_size": 63488 00:15:40.475 }, 00:15:40.475 { 00:15:40.475 "name": "BaseBdev4", 00:15:40.475 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:40.475 "is_configured": true, 00:15:40.475 "data_offset": 2048, 00:15:40.475 "data_size": 63488 00:15:40.475 } 00:15:40.475 ] 00:15:40.475 }' 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.475 03:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.735 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.735 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.735 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.735 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.735 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.995 03:24:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.995 [2024-11-21 03:24:28.312838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.995 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.995 "name": "Existed_Raid", 00:15:40.995 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:40.995 "strip_size_kb": 64, 00:15:40.995 "state": "configuring", 00:15:40.996 "raid_level": "raid5f", 00:15:40.996 "superblock": true, 00:15:40.996 "num_base_bdevs": 4, 00:15:40.996 "num_base_bdevs_discovered": 3, 00:15:40.996 "num_base_bdevs_operational": 4, 00:15:40.996 "base_bdevs_list": [ 00:15:40.996 { 00:15:40.996 "name": null, 00:15:40.996 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:40.996 "is_configured": false, 00:15:40.996 "data_offset": 0, 00:15:40.996 "data_size": 63488 00:15:40.996 }, 00:15:40.996 { 00:15:40.996 "name": "BaseBdev2", 00:15:40.996 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:40.996 "is_configured": true, 00:15:40.996 "data_offset": 2048, 00:15:40.996 "data_size": 63488 00:15:40.996 }, 00:15:40.996 { 00:15:40.996 "name": "BaseBdev3", 00:15:40.996 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:40.996 "is_configured": true, 00:15:40.996 "data_offset": 2048, 00:15:40.996 "data_size": 63488 00:15:40.996 }, 00:15:40.996 { 00:15:40.996 "name": "BaseBdev4", 00:15:40.996 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:40.996 "is_configured": true, 00:15:40.996 "data_offset": 2048, 00:15:40.996 "data_size": 63488 00:15:40.996 } 00:15:40.996 ] 00:15:40.996 }' 00:15:40.996 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.996 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # 
jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.255 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3e1aa298-7472-47cf-b003-07cdcf401f63 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 [2024-11-21 03:24:28.863855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.516 [2024-11-21 03:24:28.864064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:41.516 [2024-11-21 03:24:28.864089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:41.516 [2024-11-21 03:24:28.864322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:15:41.516 
NewBaseBdev 00:15:41.516 [2024-11-21 03:24:28.864807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:41.516 [2024-11-21 03:24:28.864824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:41.516 [2024-11-21 03:24:28.864926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.516 03:24:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 [ 00:15:41.516 { 00:15:41.516 "name": "NewBaseBdev", 00:15:41.516 "aliases": [ 00:15:41.516 "3e1aa298-7472-47cf-b003-07cdcf401f63" 00:15:41.516 ], 00:15:41.516 "product_name": "Malloc disk", 00:15:41.516 "block_size": 512, 00:15:41.516 "num_blocks": 65536, 00:15:41.516 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:41.516 "assigned_rate_limits": { 00:15:41.516 "rw_ios_per_sec": 0, 00:15:41.516 "rw_mbytes_per_sec": 0, 00:15:41.516 "r_mbytes_per_sec": 0, 00:15:41.516 "w_mbytes_per_sec": 0 00:15:41.516 }, 00:15:41.516 "claimed": true, 00:15:41.516 "claim_type": "exclusive_write", 00:15:41.516 "zoned": false, 00:15:41.516 "supported_io_types": { 00:15:41.516 "read": true, 00:15:41.516 "write": true, 00:15:41.516 "unmap": true, 00:15:41.516 "flush": true, 00:15:41.516 "reset": true, 00:15:41.516 "nvme_admin": false, 00:15:41.516 "nvme_io": false, 00:15:41.516 "nvme_io_md": false, 00:15:41.516 "write_zeroes": true, 00:15:41.516 "zcopy": true, 00:15:41.516 "get_zone_info": false, 00:15:41.516 "zone_management": false, 00:15:41.516 "zone_append": false, 00:15:41.516 "compare": false, 00:15:41.516 "compare_and_write": false, 00:15:41.516 "abort": true, 00:15:41.516 "seek_hole": false, 00:15:41.516 "seek_data": false, 00:15:41.516 "copy": true, 00:15:41.516 "nvme_iov_md": false 00:15:41.516 }, 00:15:41.516 "memory_domains": [ 00:15:41.516 { 00:15:41.516 "dma_device_id": "system", 00:15:41.516 "dma_device_type": 1 00:15:41.516 }, 00:15:41.516 { 00:15:41.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.516 "dma_device_type": 2 00:15:41.516 } 00:15:41.516 ], 00:15:41.516 "driver_specific": {} 00:15:41.516 } 00:15:41.516 ] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.516 03:24:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.516 "name": "Existed_Raid", 00:15:41.516 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:41.516 
"strip_size_kb": 64, 00:15:41.516 "state": "online", 00:15:41.516 "raid_level": "raid5f", 00:15:41.516 "superblock": true, 00:15:41.516 "num_base_bdevs": 4, 00:15:41.516 "num_base_bdevs_discovered": 4, 00:15:41.516 "num_base_bdevs_operational": 4, 00:15:41.516 "base_bdevs_list": [ 00:15:41.516 { 00:15:41.516 "name": "NewBaseBdev", 00:15:41.516 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:41.516 "is_configured": true, 00:15:41.516 "data_offset": 2048, 00:15:41.516 "data_size": 63488 00:15:41.516 }, 00:15:41.516 { 00:15:41.516 "name": "BaseBdev2", 00:15:41.516 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:41.516 "is_configured": true, 00:15:41.516 "data_offset": 2048, 00:15:41.516 "data_size": 63488 00:15:41.516 }, 00:15:41.516 { 00:15:41.516 "name": "BaseBdev3", 00:15:41.516 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:41.516 "is_configured": true, 00:15:41.516 "data_offset": 2048, 00:15:41.516 "data_size": 63488 00:15:41.516 }, 00:15:41.516 { 00:15:41.516 "name": "BaseBdev4", 00:15:41.516 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:41.516 "is_configured": true, 00:15:41.516 "data_offset": 2048, 00:15:41.516 "data_size": 63488 00:15:41.516 } 00:15:41.516 ] 00:15:41.516 }' 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.516 03:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@184 -- # local name 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.780 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.780 [2024-11-21 03:24:29.332195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.040 "name": "Existed_Raid", 00:15:42.040 "aliases": [ 00:15:42.040 "9917f493-0f29-46db-9a88-22b21f39f591" 00:15:42.040 ], 00:15:42.040 "product_name": "Raid Volume", 00:15:42.040 "block_size": 512, 00:15:42.040 "num_blocks": 190464, 00:15:42.040 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:42.040 "assigned_rate_limits": { 00:15:42.040 "rw_ios_per_sec": 0, 00:15:42.040 "rw_mbytes_per_sec": 0, 00:15:42.040 "r_mbytes_per_sec": 0, 00:15:42.040 "w_mbytes_per_sec": 0 00:15:42.040 }, 00:15:42.040 "claimed": false, 00:15:42.040 "zoned": false, 00:15:42.040 "supported_io_types": { 00:15:42.040 "read": true, 00:15:42.040 "write": true, 00:15:42.040 "unmap": false, 00:15:42.040 "flush": false, 00:15:42.040 "reset": true, 00:15:42.040 "nvme_admin": false, 00:15:42.040 "nvme_io": false, 00:15:42.040 "nvme_io_md": false, 00:15:42.040 "write_zeroes": true, 00:15:42.040 "zcopy": false, 00:15:42.040 "get_zone_info": false, 00:15:42.040 "zone_management": false, 00:15:42.040 "zone_append": false, 00:15:42.040 "compare": 
false, 00:15:42.040 "compare_and_write": false, 00:15:42.040 "abort": false, 00:15:42.040 "seek_hole": false, 00:15:42.040 "seek_data": false, 00:15:42.040 "copy": false, 00:15:42.040 "nvme_iov_md": false 00:15:42.040 }, 00:15:42.040 "driver_specific": { 00:15:42.040 "raid": { 00:15:42.040 "uuid": "9917f493-0f29-46db-9a88-22b21f39f591", 00:15:42.040 "strip_size_kb": 64, 00:15:42.040 "state": "online", 00:15:42.040 "raid_level": "raid5f", 00:15:42.040 "superblock": true, 00:15:42.040 "num_base_bdevs": 4, 00:15:42.040 "num_base_bdevs_discovered": 4, 00:15:42.040 "num_base_bdevs_operational": 4, 00:15:42.040 "base_bdevs_list": [ 00:15:42.040 { 00:15:42.040 "name": "NewBaseBdev", 00:15:42.040 "uuid": "3e1aa298-7472-47cf-b003-07cdcf401f63", 00:15:42.040 "is_configured": true, 00:15:42.040 "data_offset": 2048, 00:15:42.040 "data_size": 63488 00:15:42.040 }, 00:15:42.040 { 00:15:42.040 "name": "BaseBdev2", 00:15:42.040 "uuid": "a53f6ad1-5ce6-4ea2-bcb2-864e55a62b89", 00:15:42.040 "is_configured": true, 00:15:42.040 "data_offset": 2048, 00:15:42.040 "data_size": 63488 00:15:42.040 }, 00:15:42.040 { 00:15:42.040 "name": "BaseBdev3", 00:15:42.040 "uuid": "85bdd52f-c568-4645-b515-ca0911ff224a", 00:15:42.040 "is_configured": true, 00:15:42.040 "data_offset": 2048, 00:15:42.040 "data_size": 63488 00:15:42.040 }, 00:15:42.040 { 00:15:42.040 "name": "BaseBdev4", 00:15:42.040 "uuid": "63307ce3-9905-44ab-aa9c-92598dc12de1", 00:15:42.040 "is_configured": true, 00:15:42.040 "data_offset": 2048, 00:15:42.040 "data_size": 63488 00:15:42.040 } 00:15:42.040 ] 00:15:42.040 } 00:15:42.040 } 00:15:42.040 }' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:42.040 BaseBdev2 00:15:42.040 BaseBdev3 00:15:42.040 BaseBdev4' 00:15:42.040 03:24:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
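The `cmp_raid_bdev='512   '` value above comes from jq's `join(" ")`, which renders null fields as empty strings; with `md_size`, `md_interleave`, and `dif_type` unset in this run, a 512-byte-block bdev joins to `"512"` followed by three spaces, which is exactly what the `[[ 512    == \5\1\2\ \ \  ]]` pattern checks. A minimal sketch of that behavior (the helper name is hypothetical, not part of the SPDK scripts):

```python
# Mimics jq's join(" ") as used by bdev_raid.sh@189/@192: null fields
# become empty strings, numbers are stringified.
def join_like_jq(fields):
    return " ".join("" if f is None else str(f) for f in fields)

# block_size, md_size, md_interleave, dif_type (md/DIF unset here)
cmp_base_bdev = join_like_jq([512, None, None, None])
print(repr(cmp_base_bdev))  # → '512   '
```

The trailing spaces matter: a base bdev with different metadata settings would join to a different string and fail the comparison against the raid bdev.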
00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.040 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.301 03:24:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.301 [2024-11-21 03:24:29.628101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.301 [2024-11-21 03:24:29.628129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.301 [2024-11-21 03:24:29.628194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.301 [2024-11-21 03:24:29.628463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.301 [2024-11-21 03:24:29.628485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95945 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95945 ']' 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 95945 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95945 00:15:42.301 killing process with pid 95945 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95945' 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 95945 00:15:42.301 [2024-11-21 03:24:29.676064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.301 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 95945 00:15:42.301 [2024-11-21 03:24:29.716609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.562 ************************************ 00:15:42.562 END TEST raid5f_state_function_test_sb 00:15:42.562 ************************************ 00:15:42.562 03:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:42.562 00:15:42.562 real 0m9.672s 00:15:42.562 user 0m16.425s 00:15:42.562 sys 0m2.202s 00:15:42.562 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.562 03:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 03:24:30 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:42.562 03:24:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:42.562 03:24:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.562 03:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 ************************************ 00:15:42.562 START TEST raid5f_superblock_test 00:15:42.562 
************************************ 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=96599 00:15:42.562 
03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96599 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 96599 ']' 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.562 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 [2024-11-21 03:24:30.122342] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:42.562 [2024-11-21 03:24:30.122525] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96599 ] 00:15:42.823 [2024-11-21 03:24:30.265098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
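The test setup above (`'[' raid5f '!=' raid1 ']'` at bdev_raid.sh@404-406) only assigns a strip size for non-raid1 levels; raid1 is a mirror and takes none, while raid5f gets `-z 64`. A sketch of that branching (function name is hypothetical):

```python
# Mirrors the strip_size / strip_size_create_arg setup from the log:
# non-raid1 levels get a 64 KiB strip passed as "-z 64".
def strip_size_args(raid_level):
    if raid_level != "raid1":
        return 64, ["-z", "64"]
    return None, []

print(strip_size_args("raid5f"))  # → (64, ['-z', '64'])
```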
00:15:42.823 [2024-11-21 03:24:30.303365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.823 [2024-11-21 03:24:30.329875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.823 [2024-11-21 03:24:30.373118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.823 [2024-11-21 03:24:30.373168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.393 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 malloc1 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 [2024-11-21 03:24:30.981202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.654 [2024-11-21 03:24:30.981259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.654 [2024-11-21 03:24:30.981282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:43.654 [2024-11-21 03:24:30.981293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.654 [2024-11-21 03:24:30.983473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.654 [2024-11-21 03:24:30.983516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.654 pt1 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.654 03:24:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 malloc2 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 [2024-11-21 03:24:31.009870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.654 [2024-11-21 03:24:31.009919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.654 [2024-11-21 03:24:31.009935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:43.654 [2024-11-21 03:24:31.009943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.654 [2024-11-21 03:24:31.011993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.654 [2024-11-21 03:24:31.012049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.654 pt2 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
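The `(( i <= num_base_bdevs ))` loop above builds each base bdev's names from the loop index: a `mallocN` backing bdev, a `ptN` passthru wrapper, and a fixed-pattern UUID whose last group is the zero-padded index. A sketch of that naming scheme (the helper is hypothetical; the name/UUID patterns are taken from the log):

```python
# i-th base bdev identity as constructed by bdev_raid.sh@417-419:
# malloc backing bdev, passthru wrapper, deterministic UUID.
def base_bdev_identity(i):
    return f"malloc{i}", f"pt{i}", f"00000000-0000-0000-0000-{i:012d}"

for i in range(1, 5):
    print(base_bdev_identity(i))
```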
00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 malloc3 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 [2024-11-21 03:24:31.038487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.654 [2024-11-21 03:24:31.038532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.654 [2024-11-21 03:24:31.038549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:15:43.654 [2024-11-21 03:24:31.038557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.654 [2024-11-21 03:24:31.040604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.654 [2024-11-21 03:24:31.040638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.654 pt3 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 malloc4 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 [2024-11-21 03:24:31.085317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:43.654 [2024-11-21 03:24:31.085416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.654 [2024-11-21 03:24:31.085469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:43.654 [2024-11-21 03:24:31.085490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.654 [2024-11-21 03:24:31.088870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.654 [2024-11-21 03:24:31.088923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:43.654 pt4 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.654 [2024-11-21 03:24:31.097311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.654 [2024-11-21 03:24:31.099381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.654 [2024-11-21 03:24:31.099465] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.654 [2024-11-21 03:24:31.099539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:43.654 [2024-11-21 03:24:31.099721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:43.654 [2024-11-21 03:24:31.099748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:43.654 [2024-11-21 03:24:31.100009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:43.654 [2024-11-21 03:24:31.100550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:43.654 [2024-11-21 03:24:31.100580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:43.654 [2024-11-21 03:24:31.100718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.654 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
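The raid bdev JSON in this test reports `block_size` 512, a per-base-bdev `data_size` of 63488 blocks, and a total `num_blocks` of 190464. Those numbers are consistent with the usual RAID-5 capacity relation, where one member's worth of each stripe holds parity:

```python
num_base_bdevs = 4
data_size = 63488  # data blocks contributed by each base bdev (512 B blocks)

# raid5f stores one parity block per stripe, so usable capacity is the
# data of (n - 1) members: 63488 * 3 = 190464, matching the JSON above.
num_blocks = data_size * (num_base_bdevs - 1)
print(num_blocks)  # → 190464
```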
00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.655 "name": "raid_bdev1", 00:15:43.655 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:43.655 "strip_size_kb": 64, 00:15:43.655 "state": "online", 00:15:43.655 "raid_level": "raid5f", 00:15:43.655 "superblock": true, 00:15:43.655 "num_base_bdevs": 4, 00:15:43.655 "num_base_bdevs_discovered": 4, 00:15:43.655 "num_base_bdevs_operational": 4, 00:15:43.655 "base_bdevs_list": [ 00:15:43.655 { 00:15:43.655 "name": "pt1", 00:15:43.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.655 "is_configured": true, 00:15:43.655 "data_offset": 2048, 00:15:43.655 "data_size": 63488 00:15:43.655 }, 00:15:43.655 { 00:15:43.655 "name": "pt2", 00:15:43.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.655 "is_configured": true, 00:15:43.655 "data_offset": 2048, 00:15:43.655 "data_size": 63488 00:15:43.655 }, 00:15:43.655 { 00:15:43.655 "name": "pt3", 00:15:43.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.655 "is_configured": true, 00:15:43.655 "data_offset": 2048, 00:15:43.655 "data_size": 63488 00:15:43.655 }, 00:15:43.655 { 00:15:43.655 "name": "pt4", 00:15:43.655 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:43.655 "is_configured": true, 00:15:43.655 "data_offset": 2048, 00:15:43.655 "data_size": 63488 00:15:43.655 } 00:15:43.655 ] 00:15:43.655 }' 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.655 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:44.226 [2024-11-21 03:24:31.582959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.226 "name": "raid_bdev1", 00:15:44.226 "aliases": [ 00:15:44.226 "2597597f-b186-4901-9300-10254e282fc4" 00:15:44.226 ], 00:15:44.226 "product_name": "Raid Volume", 00:15:44.226 
"block_size": 512, 00:15:44.226 "num_blocks": 190464, 00:15:44.226 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:44.226 "assigned_rate_limits": { 00:15:44.226 "rw_ios_per_sec": 0, 00:15:44.226 "rw_mbytes_per_sec": 0, 00:15:44.226 "r_mbytes_per_sec": 0, 00:15:44.226 "w_mbytes_per_sec": 0 00:15:44.226 }, 00:15:44.226 "claimed": false, 00:15:44.226 "zoned": false, 00:15:44.226 "supported_io_types": { 00:15:44.226 "read": true, 00:15:44.226 "write": true, 00:15:44.226 "unmap": false, 00:15:44.226 "flush": false, 00:15:44.226 "reset": true, 00:15:44.226 "nvme_admin": false, 00:15:44.226 "nvme_io": false, 00:15:44.226 "nvme_io_md": false, 00:15:44.226 "write_zeroes": true, 00:15:44.226 "zcopy": false, 00:15:44.226 "get_zone_info": false, 00:15:44.226 "zone_management": false, 00:15:44.226 "zone_append": false, 00:15:44.226 "compare": false, 00:15:44.226 "compare_and_write": false, 00:15:44.226 "abort": false, 00:15:44.226 "seek_hole": false, 00:15:44.226 "seek_data": false, 00:15:44.226 "copy": false, 00:15:44.226 "nvme_iov_md": false 00:15:44.226 }, 00:15:44.226 "driver_specific": { 00:15:44.226 "raid": { 00:15:44.226 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:44.226 "strip_size_kb": 64, 00:15:44.226 "state": "online", 00:15:44.226 "raid_level": "raid5f", 00:15:44.226 "superblock": true, 00:15:44.226 "num_base_bdevs": 4, 00:15:44.226 "num_base_bdevs_discovered": 4, 00:15:44.226 "num_base_bdevs_operational": 4, 00:15:44.226 "base_bdevs_list": [ 00:15:44.226 { 00:15:44.226 "name": "pt1", 00:15:44.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.226 "is_configured": true, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 }, 00:15:44.226 { 00:15:44.226 "name": "pt2", 00:15:44.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.226 "is_configured": true, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 }, 00:15:44.226 { 00:15:44.226 "name": "pt3", 00:15:44.226 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:15:44.226 "is_configured": true, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 }, 00:15:44.226 { 00:15:44.226 "name": "pt4", 00:15:44.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.226 "is_configured": true, 00:15:44.226 "data_offset": 2048, 00:15:44.226 "data_size": 63488 00:15:44.226 } 00:15:44.226 ] 00:15:44.226 } 00:15:44.226 } 00:15:44.226 }' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:44.226 pt2 00:15:44.226 pt3 00:15:44.226 pt4' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.226 03:24:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.226 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 [2024-11-21 03:24:31.915042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2597597f-b186-4901-9300-10254e282fc4 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2597597f-b186-4901-9300-10254e282fc4 ']' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:44.487 [2024-11-21 03:24:31.942845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.487 [2024-11-21 03:24:31.942878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.487 [2024-11-21 03:24:31.942952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.487 [2024-11-21 03:24:31.943039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.487 [2024-11-21 03:24:31.943052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:44.487 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.749 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.749 [2024-11-21 03:24:32.106980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:44.749 [2024-11-21 03:24:32.108801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:15:44.749 [2024-11-21 03:24:32.108847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:44.749 [2024-11-21 03:24:32.108875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:44.750 [2024-11-21 03:24:32.108915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:44.750 [2024-11-21 03:24:32.108956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:44.750 [2024-11-21 03:24:32.108972] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:44.750 [2024-11-21 03:24:32.108988] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:44.750 [2024-11-21 03:24:32.109000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.750 [2024-11-21 03:24:32.109010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:15:44.750 request: 00:15:44.750 { 00:15:44.750 "name": "raid_bdev1", 00:15:44.750 "raid_level": "raid5f", 00:15:44.750 "base_bdevs": [ 00:15:44.750 "malloc1", 00:15:44.750 "malloc2", 00:15:44.750 "malloc3", 00:15:44.750 "malloc4" 00:15:44.750 ], 00:15:44.750 "strip_size_kb": 64, 00:15:44.750 "superblock": false, 00:15:44.750 "method": "bdev_raid_create", 00:15:44.750 "req_id": 1 00:15:44.750 } 00:15:44.750 Got JSON-RPC error response 00:15:44.750 response: 00:15:44.750 { 00:15:44.750 "code": -17, 00:15:44.750 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:44.750 } 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # 
es=1 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.750 [2024-11-21 03:24:32.174956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:44.750 [2024-11-21 03:24:32.175004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.750 [2024-11-21 03:24:32.175029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:44.750 [2024-11-21 03:24:32.175041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.750 [2024-11-21 03:24:32.177047] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:44.750 [2024-11-21 03:24:32.177084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:44.750 [2024-11-21 03:24:32.177140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:44.750 [2024-11-21 03:24:32.177196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:44.750 pt1 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.750 "name": "raid_bdev1", 00:15:44.750 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:44.750 "strip_size_kb": 64, 00:15:44.750 "state": "configuring", 00:15:44.750 "raid_level": "raid5f", 00:15:44.750 "superblock": true, 00:15:44.750 "num_base_bdevs": 4, 00:15:44.750 "num_base_bdevs_discovered": 1, 00:15:44.750 "num_base_bdevs_operational": 4, 00:15:44.750 "base_bdevs_list": [ 00:15:44.750 { 00:15:44.750 "name": "pt1", 00:15:44.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.750 "is_configured": true, 00:15:44.750 "data_offset": 2048, 00:15:44.750 "data_size": 63488 00:15:44.750 }, 00:15:44.750 { 00:15:44.750 "name": null, 00:15:44.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.750 "is_configured": false, 00:15:44.750 "data_offset": 2048, 00:15:44.750 "data_size": 63488 00:15:44.750 }, 00:15:44.750 { 00:15:44.750 "name": null, 00:15:44.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.750 "is_configured": false, 00:15:44.750 "data_offset": 2048, 00:15:44.750 "data_size": 63488 00:15:44.750 }, 00:15:44.750 { 00:15:44.750 "name": null, 00:15:44.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.750 "is_configured": false, 00:15:44.750 "data_offset": 2048, 00:15:44.750 "data_size": 63488 00:15:44.750 } 00:15:44.750 ] 00:15:44.750 }' 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.750 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.021 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:45.021 03:24:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.021 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.021 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.021 [2024-11-21 03:24:32.571070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.021 [2024-11-21 03:24:32.571137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.021 [2024-11-21 03:24:32.571156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:45.021 [2024-11-21 03:24:32.571169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.022 [2024-11-21 03:24:32.571494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.022 [2024-11-21 03:24:32.571521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.022 [2024-11-21 03:24:32.571578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:45.022 [2024-11-21 03:24:32.571602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.022 pt2 00:15:45.022 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.022 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:45.022 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.022 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.295 [2024-11-21 03:24:32.583086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.295 "name": "raid_bdev1", 00:15:45.295 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:45.295 "strip_size_kb": 64, 00:15:45.295 "state": "configuring", 00:15:45.295 "raid_level": "raid5f", 00:15:45.295 "superblock": true, 00:15:45.295 
"num_base_bdevs": 4, 00:15:45.295 "num_base_bdevs_discovered": 1, 00:15:45.295 "num_base_bdevs_operational": 4, 00:15:45.295 "base_bdevs_list": [ 00:15:45.295 { 00:15:45.295 "name": "pt1", 00:15:45.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.295 "is_configured": true, 00:15:45.295 "data_offset": 2048, 00:15:45.295 "data_size": 63488 00:15:45.295 }, 00:15:45.295 { 00:15:45.295 "name": null, 00:15:45.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.295 "is_configured": false, 00:15:45.295 "data_offset": 0, 00:15:45.295 "data_size": 63488 00:15:45.295 }, 00:15:45.295 { 00:15:45.295 "name": null, 00:15:45.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.295 "is_configured": false, 00:15:45.295 "data_offset": 2048, 00:15:45.295 "data_size": 63488 00:15:45.295 }, 00:15:45.295 { 00:15:45.295 "name": null, 00:15:45.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.295 "is_configured": false, 00:15:45.295 "data_offset": 2048, 00:15:45.295 "data_size": 63488 00:15:45.295 } 00:15:45.295 ] 00:15:45.295 }' 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.295 03:24:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.556 [2024-11-21 03:24:33.083210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.556 [2024-11-21 
03:24:33.083277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.556 [2024-11-21 03:24:33.083295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:45.556 [2024-11-21 03:24:33.083304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.556 [2024-11-21 03:24:33.083669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.556 [2024-11-21 03:24:33.083695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.556 [2024-11-21 03:24:33.083758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:45.556 [2024-11-21 03:24:33.083777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.556 pt2 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.556 [2024-11-21 03:24:33.095208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.556 [2024-11-21 03:24:33.095254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.556 [2024-11-21 03:24:33.095269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:45.556 [2024-11-21 03:24:33.095277] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:45.556 [2024-11-21 03:24:33.095594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.556 [2024-11-21 03:24:33.095615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.556 [2024-11-21 03:24:33.095668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:45.556 [2024-11-21 03:24:33.095684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.556 pt3 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.556 [2024-11-21 03:24:33.107217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:45.556 [2024-11-21 03:24:33.107263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.556 [2024-11-21 03:24:33.107283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:45.556 [2024-11-21 03:24:33.107290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.556 [2024-11-21 03:24:33.107583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.556 [2024-11-21 03:24:33.107603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:45.556 [2024-11-21 03:24:33.107658] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:45.556 [2024-11-21 03:24:33.107674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:45.556 [2024-11-21 03:24:33.107771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:45.556 [2024-11-21 03:24:33.107791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:45.556 [2024-11-21 03:24:33.107993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:45.556 [2024-11-21 03:24:33.108463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:45.556 [2024-11-21 03:24:33.108486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:45.556 [2024-11-21 03:24:33.108580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.556 pt4 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.556 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.557 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.817 "name": "raid_bdev1", 00:15:45.817 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:45.817 "strip_size_kb": 64, 00:15:45.817 "state": "online", 00:15:45.817 "raid_level": "raid5f", 00:15:45.817 "superblock": true, 00:15:45.817 "num_base_bdevs": 4, 00:15:45.817 "num_base_bdevs_discovered": 4, 00:15:45.817 "num_base_bdevs_operational": 4, 00:15:45.817 "base_bdevs_list": [ 00:15:45.817 { 00:15:45.817 "name": "pt1", 00:15:45.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.817 "is_configured": true, 00:15:45.817 "data_offset": 2048, 00:15:45.817 "data_size": 63488 00:15:45.817 }, 00:15:45.817 { 00:15:45.817 "name": "pt2", 00:15:45.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.817 "is_configured": true, 00:15:45.817 "data_offset": 2048, 00:15:45.817 "data_size": 63488 00:15:45.817 }, 00:15:45.817 { 00:15:45.817 "name": "pt3", 
00:15:45.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.817 "is_configured": true, 00:15:45.817 "data_offset": 2048, 00:15:45.817 "data_size": 63488 00:15:45.817 }, 00:15:45.817 { 00:15:45.817 "name": "pt4", 00:15:45.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.817 "is_configured": true, 00:15:45.817 "data_offset": 2048, 00:15:45.817 "data_size": 63488 00:15:45.817 } 00:15:45.817 ] 00:15:45.817 }' 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.817 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.076 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.077 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.077 [2024-11-21 03:24:33.623503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.337 03:24:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.337 "name": "raid_bdev1", 00:15:46.337 "aliases": [ 00:15:46.337 "2597597f-b186-4901-9300-10254e282fc4" 00:15:46.337 ], 00:15:46.337 "product_name": "Raid Volume", 00:15:46.337 "block_size": 512, 00:15:46.337 "num_blocks": 190464, 00:15:46.337 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:46.337 "assigned_rate_limits": { 00:15:46.337 "rw_ios_per_sec": 0, 00:15:46.337 "rw_mbytes_per_sec": 0, 00:15:46.337 "r_mbytes_per_sec": 0, 00:15:46.337 "w_mbytes_per_sec": 0 00:15:46.337 }, 00:15:46.337 "claimed": false, 00:15:46.337 "zoned": false, 00:15:46.337 "supported_io_types": { 00:15:46.337 "read": true, 00:15:46.337 "write": true, 00:15:46.337 "unmap": false, 00:15:46.337 "flush": false, 00:15:46.337 "reset": true, 00:15:46.337 "nvme_admin": false, 00:15:46.337 "nvme_io": false, 00:15:46.337 "nvme_io_md": false, 00:15:46.337 "write_zeroes": true, 00:15:46.337 "zcopy": false, 00:15:46.337 "get_zone_info": false, 00:15:46.337 "zone_management": false, 00:15:46.337 "zone_append": false, 00:15:46.337 "compare": false, 00:15:46.337 "compare_and_write": false, 00:15:46.337 "abort": false, 00:15:46.337 "seek_hole": false, 00:15:46.337 "seek_data": false, 00:15:46.337 "copy": false, 00:15:46.337 "nvme_iov_md": false 00:15:46.337 }, 00:15:46.337 "driver_specific": { 00:15:46.337 "raid": { 00:15:46.337 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:46.337 "strip_size_kb": 64, 00:15:46.337 "state": "online", 00:15:46.337 "raid_level": "raid5f", 00:15:46.337 "superblock": true, 00:15:46.337 "num_base_bdevs": 4, 00:15:46.337 "num_base_bdevs_discovered": 4, 00:15:46.337 "num_base_bdevs_operational": 4, 00:15:46.337 "base_bdevs_list": [ 00:15:46.337 { 00:15:46.337 "name": "pt1", 00:15:46.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.337 "is_configured": true, 00:15:46.337 "data_offset": 2048, 00:15:46.337 "data_size": 63488 00:15:46.337 }, 00:15:46.337 { 00:15:46.337 
"name": "pt2", 00:15:46.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.337 "is_configured": true, 00:15:46.337 "data_offset": 2048, 00:15:46.337 "data_size": 63488 00:15:46.337 }, 00:15:46.337 { 00:15:46.337 "name": "pt3", 00:15:46.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.337 "is_configured": true, 00:15:46.337 "data_offset": 2048, 00:15:46.337 "data_size": 63488 00:15:46.337 }, 00:15:46.337 { 00:15:46.337 "name": "pt4", 00:15:46.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.337 "is_configured": true, 00:15:46.337 "data_offset": 2048, 00:15:46.337 "data_size": 63488 00:15:46.337 } 00:15:46.337 ] 00:15:46.337 } 00:15:46.337 } 00:15:46.337 }' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:46.337 pt2 00:15:46.337 pt3 00:15:46.337 pt4' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.337 03:24:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.337 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:46.598 [2024-11-21 03:24:33.927611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2597597f-b186-4901-9300-10254e282fc4 '!=' 2597597f-b186-4901-9300-10254e282fc4 ']' 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:46.598 03:24:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.598 [2024-11-21 03:24:33.975495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.598 03:24:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.598 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.598 "name": "raid_bdev1", 00:15:46.598 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:46.598 "strip_size_kb": 64, 00:15:46.598 "state": "online", 00:15:46.598 "raid_level": "raid5f", 00:15:46.598 "superblock": true, 00:15:46.598 "num_base_bdevs": 4, 00:15:46.598 "num_base_bdevs_discovered": 3, 00:15:46.598 "num_base_bdevs_operational": 3, 00:15:46.598 "base_bdevs_list": [ 00:15:46.599 { 00:15:46.599 "name": null, 00:15:46.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.599 "is_configured": false, 00:15:46.599 "data_offset": 0, 00:15:46.599 "data_size": 63488 00:15:46.599 }, 00:15:46.599 { 00:15:46.599 "name": "pt2", 00:15:46.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.599 "is_configured": true, 00:15:46.599 "data_offset": 2048, 00:15:46.599 "data_size": 63488 00:15:46.599 }, 00:15:46.599 { 00:15:46.599 "name": "pt3", 00:15:46.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.599 "is_configured": true, 00:15:46.599 "data_offset": 2048, 00:15:46.599 "data_size": 63488 00:15:46.599 }, 00:15:46.599 { 00:15:46.599 "name": "pt4", 00:15:46.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.599 "is_configured": true, 00:15:46.599 "data_offset": 2048, 00:15:46.599 "data_size": 63488 00:15:46.599 } 00:15:46.599 ] 00:15:46.599 }' 00:15:46.599 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.599 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 [2024-11-21 03:24:34.447581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.169 [2024-11-21 03:24:34.447611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.169 [2024-11-21 03:24:34.447677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.169 [2024-11-21 03:24:34.447742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.169 [2024-11-21 03:24:34.447752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:47.169 03:24:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.169 [2024-11-21 03:24:34.543587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.169 [2024-11-21 03:24:34.543638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.169 [2024-11-21 03:24:34.543655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:47.169 [2024-11-21 03:24:34.543663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.169 [2024-11-21 03:24:34.545764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.169 [2024-11-21 03:24:34.545799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.169 [2024-11-21 03:24:34.545859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:47.169 [2024-11-21 03:24:34.545889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.169 pt2 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.169 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.170 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.170 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.170 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.170 "name": "raid_bdev1", 00:15:47.170 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:47.170 "strip_size_kb": 64, 00:15:47.170 "state": "configuring", 00:15:47.170 "raid_level": "raid5f", 00:15:47.170 "superblock": true, 00:15:47.170 "num_base_bdevs": 4, 00:15:47.170 "num_base_bdevs_discovered": 1, 00:15:47.170 "num_base_bdevs_operational": 3, 00:15:47.170 "base_bdevs_list": [ 00:15:47.170 { 00:15:47.170 "name": null, 00:15:47.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.170 "is_configured": false, 
00:15:47.170 "data_offset": 2048, 00:15:47.170 "data_size": 63488 00:15:47.170 }, 00:15:47.170 { 00:15:47.170 "name": "pt2", 00:15:47.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.170 "is_configured": true, 00:15:47.170 "data_offset": 2048, 00:15:47.170 "data_size": 63488 00:15:47.170 }, 00:15:47.170 { 00:15:47.170 "name": null, 00:15:47.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.170 "is_configured": false, 00:15:47.170 "data_offset": 2048, 00:15:47.170 "data_size": 63488 00:15:47.170 }, 00:15:47.170 { 00:15:47.170 "name": null, 00:15:47.170 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.170 "is_configured": false, 00:15:47.170 "data_offset": 2048, 00:15:47.170 "data_size": 63488 00:15:47.170 } 00:15:47.170 ] 00:15:47.170 }' 00:15:47.170 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.170 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.429 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:47.429 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.429 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.429 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.429 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 [2024-11-21 03:24:34.995738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.690 [2024-11-21 03:24:34.995800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.690 [2024-11-21 03:24:34.995818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:47.690 [2024-11-21 03:24:34.995827] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.690 [2024-11-21 03:24:34.996172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.690 [2024-11-21 03:24:34.996198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.690 [2024-11-21 03:24:34.996259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:47.690 [2024-11-21 03:24:34.996287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.690 pt3 00:15:47.690 03:24:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.690 03:24:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.690 "name": "raid_bdev1", 00:15:47.690 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:47.690 "strip_size_kb": 64, 00:15:47.690 "state": "configuring", 00:15:47.690 "raid_level": "raid5f", 00:15:47.690 "superblock": true, 00:15:47.690 "num_base_bdevs": 4, 00:15:47.690 "num_base_bdevs_discovered": 2, 00:15:47.690 "num_base_bdevs_operational": 3, 00:15:47.690 "base_bdevs_list": [ 00:15:47.690 { 00:15:47.690 "name": null, 00:15:47.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.690 "is_configured": false, 00:15:47.690 "data_offset": 2048, 00:15:47.690 "data_size": 63488 00:15:47.690 }, 00:15:47.690 { 00:15:47.690 "name": "pt2", 00:15:47.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.690 "is_configured": true, 00:15:47.690 "data_offset": 2048, 00:15:47.690 "data_size": 63488 00:15:47.690 }, 00:15:47.690 { 00:15:47.690 "name": "pt3", 00:15:47.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.690 "is_configured": true, 00:15:47.690 "data_offset": 2048, 00:15:47.690 "data_size": 63488 00:15:47.690 }, 00:15:47.690 { 00:15:47.690 "name": null, 00:15:47.690 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.690 "is_configured": false, 00:15:47.690 "data_offset": 2048, 00:15:47.690 "data_size": 63488 00:15:47.690 } 00:15:47.690 ] 00:15:47.690 }' 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.690 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.951 [2024-11-21 03:24:35.427856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:47.951 [2024-11-21 03:24:35.427926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.951 [2024-11-21 03:24:35.427946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:47.951 [2024-11-21 03:24:35.427954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.951 [2024-11-21 03:24:35.428327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.951 [2024-11-21 03:24:35.428352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:47.951 [2024-11-21 03:24:35.428419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:47.951 [2024-11-21 03:24:35.428439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:47.951 [2024-11-21 03:24:35.428531] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.951 [2024-11-21 03:24:35.428544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.951 [2024-11-21 03:24:35.428760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006490 00:15:47.951 [2024-11-21 03:24:35.429282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.951 [2024-11-21 03:24:35.429306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:47.951 [2024-11-21 03:24:35.429516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.951 pt4 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.951 "name": "raid_bdev1", 00:15:47.951 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:47.951 "strip_size_kb": 64, 00:15:47.951 "state": "online", 00:15:47.951 "raid_level": "raid5f", 00:15:47.951 "superblock": true, 00:15:47.951 "num_base_bdevs": 4, 00:15:47.951 "num_base_bdevs_discovered": 3, 00:15:47.951 "num_base_bdevs_operational": 3, 00:15:47.951 "base_bdevs_list": [ 00:15:47.951 { 00:15:47.951 "name": null, 00:15:47.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.951 "is_configured": false, 00:15:47.951 "data_offset": 2048, 00:15:47.951 "data_size": 63488 00:15:47.951 }, 00:15:47.951 { 00:15:47.951 "name": "pt2", 00:15:47.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.951 "is_configured": true, 00:15:47.951 "data_offset": 2048, 00:15:47.951 "data_size": 63488 00:15:47.951 }, 00:15:47.951 { 00:15:47.951 "name": "pt3", 00:15:47.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.951 "is_configured": true, 00:15:47.951 "data_offset": 2048, 00:15:47.951 "data_size": 63488 00:15:47.951 }, 00:15:47.951 { 00:15:47.951 "name": "pt4", 00:15:47.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.951 "is_configured": true, 00:15:47.951 "data_offset": 2048, 00:15:47.951 "data_size": 63488 00:15:47.951 } 00:15:47.951 ] 00:15:47.951 }' 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.951 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.522 03:24:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 [2024-11-21 03:24:35.887961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.522 [2024-11-21 03:24:35.887987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.522 [2024-11-21 03:24:35.888051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.522 [2024-11-21 03:24:35.888114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.522 [2024-11-21 03:24:35.888127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 
00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 [2024-11-21 03:24:35.939999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.522 [2024-11-21 03:24:35.940066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.522 [2024-11-21 03:24:35.940082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:48.522 [2024-11-21 03:24:35.940093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.522 [2024-11-21 03:24:35.942293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.522 [2024-11-21 03:24:35.942331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.522 [2024-11-21 03:24:35.942384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:48.522 [2024-11-21 03:24:35.942422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.522 [2024-11-21 03:24:35.942519] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:48.522 [2024-11-21 03:24:35.942541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.522 [2024-11-21 03:24:35.942557] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:15:48.522 [2024-11-21 03:24:35.942602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.522 [2024-11-21 03:24:35.942691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.522 pt1 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.522 03:24:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.522 "name": "raid_bdev1", 00:15:48.522 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:48.522 "strip_size_kb": 64, 00:15:48.522 "state": "configuring", 00:15:48.522 "raid_level": "raid5f", 00:15:48.522 "superblock": true, 00:15:48.522 "num_base_bdevs": 4, 00:15:48.522 "num_base_bdevs_discovered": 2, 00:15:48.522 "num_base_bdevs_operational": 3, 00:15:48.522 "base_bdevs_list": [ 00:15:48.522 { 00:15:48.522 "name": null, 00:15:48.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.522 "is_configured": false, 00:15:48.522 "data_offset": 2048, 00:15:48.522 "data_size": 63488 00:15:48.522 }, 00:15:48.522 { 00:15:48.522 "name": "pt2", 00:15:48.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.522 "is_configured": true, 00:15:48.522 "data_offset": 2048, 00:15:48.522 "data_size": 63488 00:15:48.522 }, 00:15:48.522 { 00:15:48.522 "name": "pt3", 00:15:48.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.522 "is_configured": true, 00:15:48.522 "data_offset": 2048, 00:15:48.522 "data_size": 63488 00:15:48.522 }, 00:15:48.522 { 00:15:48.522 "name": null, 00:15:48.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.522 "is_configured": false, 00:15:48.522 "data_offset": 2048, 00:15:48.522 "data_size": 63488 00:15:48.522 } 00:15:48.522 ] 00:15:48.522 }' 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.522 03:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r 
'.[].base_bdevs_list[0].is_configured' 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.092 [2024-11-21 03:24:36.436117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:49.092 [2024-11-21 03:24:36.436171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.092 [2024-11-21 03:24:36.436202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:49.092 [2024-11-21 03:24:36.436211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.092 [2024-11-21 03:24:36.436564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.092 [2024-11-21 03:24:36.436590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:49.092 [2024-11-21 03:24:36.436652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:49.092 [2024-11-21 03:24:36.436670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:49.092 [2024-11-21 03:24:36.436761] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:49.092 [2024-11-21 03:24:36.436774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:49.092 [2024-11-21 03:24:36.437013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:49.092 [2024-11-21 03:24:36.437567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:49.092 [2024-11-21 03:24:36.437592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:49.092 [2024-11-21 03:24:36.437761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.092 pt4 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.092 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.093 
03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.093 "name": "raid_bdev1", 00:15:49.093 "uuid": "2597597f-b186-4901-9300-10254e282fc4", 00:15:49.093 "strip_size_kb": 64, 00:15:49.093 "state": "online", 00:15:49.093 "raid_level": "raid5f", 00:15:49.093 "superblock": true, 00:15:49.093 "num_base_bdevs": 4, 00:15:49.093 "num_base_bdevs_discovered": 3, 00:15:49.093 "num_base_bdevs_operational": 3, 00:15:49.093 "base_bdevs_list": [ 00:15:49.093 { 00:15:49.093 "name": null, 00:15:49.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.093 "is_configured": false, 00:15:49.093 "data_offset": 2048, 00:15:49.093 "data_size": 63488 00:15:49.093 }, 00:15:49.093 { 00:15:49.093 "name": "pt2", 00:15:49.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.093 "is_configured": true, 00:15:49.093 "data_offset": 2048, 00:15:49.093 "data_size": 63488 00:15:49.093 }, 00:15:49.093 { 00:15:49.093 "name": "pt3", 00:15:49.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.093 "is_configured": true, 00:15:49.093 "data_offset": 2048, 00:15:49.093 "data_size": 63488 00:15:49.093 }, 00:15:49.093 { 00:15:49.093 "name": "pt4", 00:15:49.093 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.093 "is_configured": true, 00:15:49.093 "data_offset": 2048, 00:15:49.093 "data_size": 63488 00:15:49.093 } 00:15:49.093 ] 00:15:49.093 }' 00:15:49.093 03:24:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.093 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.353 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:49.353 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:49.353 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.353 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.613 [2024-11-21 03:24:36.960452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.613 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2597597f-b186-4901-9300-10254e282fc4 '!=' 2597597f-b186-4901-9300-10254e282fc4 ']' 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96599 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 96599 ']' 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 
96599 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.614 03:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96599 00:15:49.614 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.614 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.614 killing process with pid 96599 00:15:49.614 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96599' 00:15:49.614 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 96599 00:15:49.614 [2024-11-21 03:24:37.027717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.614 [2024-11-21 03:24:37.027799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.614 [2024-11-21 03:24:37.027863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.614 [2024-11-21 03:24:37.027875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:49.614 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 96599 00:15:49.614 [2024-11-21 03:24:37.071222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.874 03:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:49.874 00:15:49.874 real 0m7.274s 00:15:49.874 user 0m12.236s 00:15:49.874 sys 0m1.604s 00:15:49.874 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.874 03:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.874 
************************************ 00:15:49.874 END TEST raid5f_superblock_test 00:15:49.874 ************************************ 00:15:49.874 03:24:37 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:49.874 03:24:37 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:49.874 03:24:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:49.874 03:24:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.874 03:24:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.874 ************************************ 00:15:49.874 START TEST raid5f_rebuild_test 00:15:49.874 ************************************ 00:15:49.874 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:49.874 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:49.874 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 
00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:49.875 03:24:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=97073 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 97073 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 97073 ']' 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.875 03:24:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.135 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:50.135 Zero copy mechanism will not be used. 00:15:50.135 [2024-11-21 03:24:37.475752] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:15:50.135 [2024-11-21 03:24:37.475870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97073 ] 00:15:50.135 [2024-11-21 03:24:37.614362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:15:50.135 [2024-11-21 03:24:37.652381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.135 [2024-11-21 03:24:37.679809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.395 [2024-11-21 03:24:37.724087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.395 [2024-11-21 03:24:37.724128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.966 BaseBdev1_malloc 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.966 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-11-21 03:24:38.295736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.967 [2024-11-21 03:24:38.295823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.967 [2024-11-21 03:24:38.295859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:50.967 [2024-11-21 03:24:38.295873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.967 [2024-11-21 03:24:38.297870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.967 [2024-11-21 03:24:38.297909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.967 BaseBdev1 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 BaseBdev2_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-11-21 03:24:38.324273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:50.967 [2024-11-21 03:24:38.324324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.967 [2024-11-21 03:24:38.324341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.967 [2024-11-21 03:24:38.324350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.967 [2024-11-21 03:24:38.326293] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.967 [2024-11-21 03:24:38.326330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.967 BaseBdev2 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 BaseBdev3_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-11-21 03:24:38.352726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:50.967 [2024-11-21 03:24:38.352775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.967 [2024-11-21 03:24:38.352793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:50.967 [2024-11-21 03:24:38.352802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.967 [2024-11-21 03:24:38.354741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.967 [2024-11-21 03:24:38.354780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:50.967 
BaseBdev3 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 BaseBdev4_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-11-21 03:24:38.392314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:50.967 [2024-11-21 03:24:38.392369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.967 [2024-11-21 03:24:38.392390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:50.967 [2024-11-21 03:24:38.392401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.967 [2024-11-21 03:24:38.394579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.967 [2024-11-21 03:24:38.394618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:50.967 BaseBdev4 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 spare_malloc 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 spare_delay 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-11-21 03:24:38.432973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.967 [2024-11-21 03:24:38.433038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.967 [2024-11-21 03:24:38.433060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:50.967 [2024-11-21 03:24:38.433071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.967 [2024-11-21 03:24:38.435064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.967 [2024-11-21 03:24:38.435101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.967 spare 00:15:50.967 03:24:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-11-21 03:24:38.445065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.967 [2024-11-21 03:24:38.446796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.967 [2024-11-21 03:24:38.446869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.967 [2024-11-21 03:24:38.446910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:50.967 [2024-11-21 03:24:38.446987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:50.967 [2024-11-21 03:24:38.447034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:50.967 [2024-11-21 03:24:38.447273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:50.967 [2024-11-21 03:24:38.447722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:50.967 [2024-11-21 03:24:38.447741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:50.967 [2024-11-21 03:24:38.447848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:50.967 
03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.967 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.967 "name": "raid_bdev1", 00:15:50.967 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:50.967 "strip_size_kb": 64, 00:15:50.967 "state": "online", 00:15:50.967 "raid_level": "raid5f", 00:15:50.967 "superblock": false, 00:15:50.967 "num_base_bdevs": 4, 00:15:50.967 "num_base_bdevs_discovered": 4, 00:15:50.967 "num_base_bdevs_operational": 4, 00:15:50.967 "base_bdevs_list": [ 00:15:50.967 { 
00:15:50.967 "name": "BaseBdev1", 00:15:50.967 "uuid": "3bf8ff1a-bf47-51e7-a113-f3974b9547dd", 00:15:50.967 "is_configured": true, 00:15:50.968 "data_offset": 0, 00:15:50.968 "data_size": 65536 00:15:50.968 }, 00:15:50.968 { 00:15:50.968 "name": "BaseBdev2", 00:15:50.968 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:50.968 "is_configured": true, 00:15:50.968 "data_offset": 0, 00:15:50.968 "data_size": 65536 00:15:50.968 }, 00:15:50.968 { 00:15:50.968 "name": "BaseBdev3", 00:15:50.968 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:50.968 "is_configured": true, 00:15:50.968 "data_offset": 0, 00:15:50.968 "data_size": 65536 00:15:50.968 }, 00:15:50.968 { 00:15:50.968 "name": "BaseBdev4", 00:15:50.968 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:50.968 "is_configured": true, 00:15:50.968 "data_offset": 0, 00:15:50.968 "data_size": 65536 00:15:50.968 } 00:15:50.968 ] 00:15:50.968 }' 00:15:50.968 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.968 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.539 [2024-11-21 03:24:38.905805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.539 03:24:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.539 03:24:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:51.798 [2024-11-21 03:24:39.161791] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:51.798 /dev/nbd0 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.798 1+0 records in 00:15:51.798 1+0 records out 00:15:51.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646654 s, 6.3 MB/s 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@893 -- # return 0 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:51.798 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:52.368 512+0 records in 00:15:52.368 512+0 records out 00:15:52.368 100663296 bytes (101 MB, 96 MiB) copied, 0.532787 s, 189 MB/s 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.368 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.628 [2024-11-21 03:24:39.970114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.628 [2024-11-21 03:24:39.995780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.628 03:24:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.628 03:24:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.628 "name": "raid_bdev1", 00:15:52.628 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:52.628 "strip_size_kb": 64, 00:15:52.628 "state": "online", 00:15:52.628 "raid_level": "raid5f", 00:15:52.628 "superblock": false, 00:15:52.628 "num_base_bdevs": 4, 00:15:52.628 "num_base_bdevs_discovered": 3, 00:15:52.628 "num_base_bdevs_operational": 3, 00:15:52.628 "base_bdevs_list": [ 00:15:52.628 { 00:15:52.628 "name": null, 00:15:52.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.628 "is_configured": false, 00:15:52.628 "data_offset": 0, 00:15:52.628 "data_size": 65536 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "name": "BaseBdev2", 00:15:52.628 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:52.628 "is_configured": true, 00:15:52.628 "data_offset": 0, 00:15:52.628 "data_size": 65536 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "name": "BaseBdev3", 00:15:52.628 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:52.628 "is_configured": true, 00:15:52.628 "data_offset": 0, 00:15:52.628 "data_size": 65536 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "name": "BaseBdev4", 00:15:52.628 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 
00:15:52.628 "is_configured": true, 00:15:52.628 "data_offset": 0, 00:15:52.628 "data_size": 65536 00:15:52.628 } 00:15:52.628 ] 00:15:52.628 }' 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.628 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.888 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.888 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 [2024-11-21 03:24:40.447928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.148 [2024-11-21 03:24:40.452285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:15:53.148 03:24:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.148 03:24:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:53.148 [2024-11-21 03:24:40.454479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.087 "name": "raid_bdev1", 00:15:54.087 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:54.087 "strip_size_kb": 64, 00:15:54.087 "state": "online", 00:15:54.087 "raid_level": "raid5f", 00:15:54.087 "superblock": false, 00:15:54.087 "num_base_bdevs": 4, 00:15:54.087 "num_base_bdevs_discovered": 4, 00:15:54.087 "num_base_bdevs_operational": 4, 00:15:54.087 "process": { 00:15:54.087 "type": "rebuild", 00:15:54.087 "target": "spare", 00:15:54.087 "progress": { 00:15:54.087 "blocks": 19200, 00:15:54.087 "percent": 9 00:15:54.087 } 00:15:54.087 }, 00:15:54.087 "base_bdevs_list": [ 00:15:54.087 { 00:15:54.087 "name": "spare", 00:15:54.087 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:15:54.087 "is_configured": true, 00:15:54.087 "data_offset": 0, 00:15:54.087 "data_size": 65536 00:15:54.087 }, 00:15:54.087 { 00:15:54.087 "name": "BaseBdev2", 00:15:54.087 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:54.087 "is_configured": true, 00:15:54.087 "data_offset": 0, 00:15:54.087 "data_size": 65536 00:15:54.087 }, 00:15:54.087 { 00:15:54.087 "name": "BaseBdev3", 00:15:54.087 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:54.087 "is_configured": true, 00:15:54.087 "data_offset": 0, 00:15:54.087 "data_size": 65536 00:15:54.087 }, 00:15:54.087 { 00:15:54.087 "name": "BaseBdev4", 00:15:54.087 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:54.087 "is_configured": true, 00:15:54.087 "data_offset": 0, 00:15:54.087 "data_size": 65536 00:15:54.087 } 00:15:54.087 ] 00:15:54.087 }' 00:15:54.087 03:24:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.087 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.087 [2024-11-21 03:24:41.617400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.347 [2024-11-21 03:24:41.662003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.347 [2024-11-21 03:24:41.662072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.347 [2024-11-21 03:24:41.662088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.347 [2024-11-21 03:24:41.662099] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.347 03:24:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.347 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.347 "name": "raid_bdev1", 00:15:54.347 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:54.347 "strip_size_kb": 64, 00:15:54.348 "state": "online", 00:15:54.348 "raid_level": "raid5f", 00:15:54.348 "superblock": false, 00:15:54.348 "num_base_bdevs": 4, 00:15:54.348 "num_base_bdevs_discovered": 3, 00:15:54.348 "num_base_bdevs_operational": 3, 00:15:54.348 "base_bdevs_list": [ 00:15:54.348 { 00:15:54.348 "name": null, 00:15:54.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.348 "is_configured": false, 00:15:54.348 "data_offset": 0, 00:15:54.348 "data_size": 65536 00:15:54.348 }, 00:15:54.348 { 00:15:54.348 "name": "BaseBdev2", 00:15:54.348 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:54.348 
"is_configured": true, 00:15:54.348 "data_offset": 0, 00:15:54.348 "data_size": 65536 00:15:54.348 }, 00:15:54.348 { 00:15:54.348 "name": "BaseBdev3", 00:15:54.348 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:54.348 "is_configured": true, 00:15:54.348 "data_offset": 0, 00:15:54.348 "data_size": 65536 00:15:54.348 }, 00:15:54.348 { 00:15:54.348 "name": "BaseBdev4", 00:15:54.348 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:54.348 "is_configured": true, 00:15:54.348 "data_offset": 0, 00:15:54.348 "data_size": 65536 00:15:54.348 } 00:15:54.348 ] 00:15:54.348 }' 00:15:54.348 03:24:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.348 03:24:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.607 03:24:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.867 "name": 
"raid_bdev1", 00:15:54.867 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:54.867 "strip_size_kb": 64, 00:15:54.867 "state": "online", 00:15:54.867 "raid_level": "raid5f", 00:15:54.867 "superblock": false, 00:15:54.867 "num_base_bdevs": 4, 00:15:54.867 "num_base_bdevs_discovered": 3, 00:15:54.867 "num_base_bdevs_operational": 3, 00:15:54.867 "base_bdevs_list": [ 00:15:54.867 { 00:15:54.867 "name": null, 00:15:54.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.867 "is_configured": false, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 }, 00:15:54.867 { 00:15:54.867 "name": "BaseBdev2", 00:15:54.867 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:54.867 "is_configured": true, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 }, 00:15:54.867 { 00:15:54.867 "name": "BaseBdev3", 00:15:54.867 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:54.867 "is_configured": true, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 }, 00:15:54.867 { 00:15:54.867 "name": "BaseBdev4", 00:15:54.867 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:54.867 "is_configured": true, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 } 00:15:54.867 ] 00:15:54.867 }' 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.867 03:24:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.867 [2024-11-21 03:24:42.256040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.867 [2024-11-21 03:24:42.260059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bc30 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.867 03:24:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:54.867 [2024-11-21 03:24:42.262297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.805 "name": "raid_bdev1", 00:15:55.805 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:55.805 "strip_size_kb": 64, 00:15:55.805 
"state": "online", 00:15:55.805 "raid_level": "raid5f", 00:15:55.805 "superblock": false, 00:15:55.805 "num_base_bdevs": 4, 00:15:55.805 "num_base_bdevs_discovered": 4, 00:15:55.805 "num_base_bdevs_operational": 4, 00:15:55.805 "process": { 00:15:55.805 "type": "rebuild", 00:15:55.805 "target": "spare", 00:15:55.805 "progress": { 00:15:55.805 "blocks": 19200, 00:15:55.805 "percent": 9 00:15:55.805 } 00:15:55.805 }, 00:15:55.805 "base_bdevs_list": [ 00:15:55.805 { 00:15:55.805 "name": "spare", 00:15:55.805 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:15:55.805 "is_configured": true, 00:15:55.805 "data_offset": 0, 00:15:55.805 "data_size": 65536 00:15:55.805 }, 00:15:55.805 { 00:15:55.805 "name": "BaseBdev2", 00:15:55.805 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:55.805 "is_configured": true, 00:15:55.805 "data_offset": 0, 00:15:55.805 "data_size": 65536 00:15:55.805 }, 00:15:55.805 { 00:15:55.805 "name": "BaseBdev3", 00:15:55.805 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:55.805 "is_configured": true, 00:15:55.805 "data_offset": 0, 00:15:55.805 "data_size": 65536 00:15:55.805 }, 00:15:55.805 { 00:15:55.805 "name": "BaseBdev4", 00:15:55.805 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:55.805 "is_configured": true, 00:15:55.805 "data_offset": 0, 00:15:55.805 "data_size": 65536 00:15:55.805 } 00:15:55.805 ] 00:15:55.805 }' 00:15:55.805 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=518 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.065 "name": "raid_bdev1", 00:15:56.065 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:56.065 "strip_size_kb": 64, 00:15:56.065 "state": "online", 00:15:56.065 "raid_level": "raid5f", 00:15:56.065 "superblock": false, 00:15:56.065 "num_base_bdevs": 4, 00:15:56.065 "num_base_bdevs_discovered": 4, 00:15:56.065 "num_base_bdevs_operational": 4, 00:15:56.065 "process": { 00:15:56.065 "type": "rebuild", 
00:15:56.065 "target": "spare", 00:15:56.065 "progress": { 00:15:56.065 "blocks": 21120, 00:15:56.065 "percent": 10 00:15:56.065 } 00:15:56.065 }, 00:15:56.065 "base_bdevs_list": [ 00:15:56.065 { 00:15:56.065 "name": "spare", 00:15:56.065 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:15:56.065 "is_configured": true, 00:15:56.065 "data_offset": 0, 00:15:56.065 "data_size": 65536 00:15:56.065 }, 00:15:56.065 { 00:15:56.065 "name": "BaseBdev2", 00:15:56.065 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:56.065 "is_configured": true, 00:15:56.065 "data_offset": 0, 00:15:56.065 "data_size": 65536 00:15:56.065 }, 00:15:56.065 { 00:15:56.065 "name": "BaseBdev3", 00:15:56.065 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:56.065 "is_configured": true, 00:15:56.065 "data_offset": 0, 00:15:56.065 "data_size": 65536 00:15:56.065 }, 00:15:56.065 { 00:15:56.065 "name": "BaseBdev4", 00:15:56.065 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:56.065 "is_configured": true, 00:15:56.065 "data_offset": 0, 00:15:56.065 "data_size": 65536 00:15:56.065 } 00:15:56.065 ] 00:15:56.065 }' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.065 03:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.004 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.004 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.004 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:57.004 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.004 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.004 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.005 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.005 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.005 03:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.005 03:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.005 03:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.264 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.264 "name": "raid_bdev1", 00:15:57.264 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:57.264 "strip_size_kb": 64, 00:15:57.264 "state": "online", 00:15:57.264 "raid_level": "raid5f", 00:15:57.265 "superblock": false, 00:15:57.265 "num_base_bdevs": 4, 00:15:57.265 "num_base_bdevs_discovered": 4, 00:15:57.265 "num_base_bdevs_operational": 4, 00:15:57.265 "process": { 00:15:57.265 "type": "rebuild", 00:15:57.265 "target": "spare", 00:15:57.265 "progress": { 00:15:57.265 "blocks": 42240, 00:15:57.265 "percent": 21 00:15:57.265 } 00:15:57.265 }, 00:15:57.265 "base_bdevs_list": [ 00:15:57.265 { 00:15:57.265 "name": "spare", 00:15:57.265 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 0, 00:15:57.265 "data_size": 65536 00:15:57.265 }, 00:15:57.265 { 00:15:57.265 "name": "BaseBdev2", 00:15:57.265 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 0, 00:15:57.265 
"data_size": 65536 00:15:57.265 }, 00:15:57.265 { 00:15:57.265 "name": "BaseBdev3", 00:15:57.265 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 0, 00:15:57.265 "data_size": 65536 00:15:57.265 }, 00:15:57.265 { 00:15:57.265 "name": "BaseBdev4", 00:15:57.265 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:57.265 "is_configured": true, 00:15:57.265 "data_offset": 0, 00:15:57.265 "data_size": 65536 00:15:57.265 } 00:15:57.265 ] 00:15:57.265 }' 00:15:57.265 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.265 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.265 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.265 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.265 03:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.202 03:24:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.202 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.202 "name": "raid_bdev1", 00:15:58.202 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:58.202 "strip_size_kb": 64, 00:15:58.202 "state": "online", 00:15:58.202 "raid_level": "raid5f", 00:15:58.202 "superblock": false, 00:15:58.202 "num_base_bdevs": 4, 00:15:58.202 "num_base_bdevs_discovered": 4, 00:15:58.202 "num_base_bdevs_operational": 4, 00:15:58.202 "process": { 00:15:58.202 "type": "rebuild", 00:15:58.202 "target": "spare", 00:15:58.202 "progress": { 00:15:58.202 "blocks": 65280, 00:15:58.202 "percent": 33 00:15:58.202 } 00:15:58.202 }, 00:15:58.202 "base_bdevs_list": [ 00:15:58.202 { 00:15:58.202 "name": "spare", 00:15:58.202 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:15:58.202 "is_configured": true, 00:15:58.202 "data_offset": 0, 00:15:58.202 "data_size": 65536 00:15:58.202 }, 00:15:58.202 { 00:15:58.202 "name": "BaseBdev2", 00:15:58.203 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:58.203 "is_configured": true, 00:15:58.203 "data_offset": 0, 00:15:58.203 "data_size": 65536 00:15:58.203 }, 00:15:58.203 { 00:15:58.203 "name": "BaseBdev3", 00:15:58.203 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:58.203 "is_configured": true, 00:15:58.203 "data_offset": 0, 00:15:58.203 "data_size": 65536 00:15:58.203 }, 00:15:58.203 { 00:15:58.203 "name": "BaseBdev4", 00:15:58.203 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:58.203 "is_configured": true, 00:15:58.203 "data_offset": 0, 00:15:58.203 "data_size": 65536 00:15:58.203 } 00:15:58.203 ] 00:15:58.203 }' 00:15:58.203 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:58.461 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.461 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.461 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.461 03:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.398 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.398 "name": "raid_bdev1", 00:15:59.398 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:15:59.398 "strip_size_kb": 64, 00:15:59.398 "state": "online", 00:15:59.398 "raid_level": "raid5f", 00:15:59.398 "superblock": false, 
00:15:59.398 "num_base_bdevs": 4, 00:15:59.398 "num_base_bdevs_discovered": 4, 00:15:59.398 "num_base_bdevs_operational": 4, 00:15:59.398 "process": { 00:15:59.398 "type": "rebuild", 00:15:59.398 "target": "spare", 00:15:59.398 "progress": { 00:15:59.398 "blocks": 86400, 00:15:59.398 "percent": 43 00:15:59.398 } 00:15:59.398 }, 00:15:59.398 "base_bdevs_list": [ 00:15:59.398 { 00:15:59.398 "name": "spare", 00:15:59.398 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:15:59.398 "is_configured": true, 00:15:59.398 "data_offset": 0, 00:15:59.398 "data_size": 65536 00:15:59.398 }, 00:15:59.398 { 00:15:59.398 "name": "BaseBdev2", 00:15:59.398 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:15:59.398 "is_configured": true, 00:15:59.398 "data_offset": 0, 00:15:59.398 "data_size": 65536 00:15:59.398 }, 00:15:59.398 { 00:15:59.398 "name": "BaseBdev3", 00:15:59.398 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:15:59.398 "is_configured": true, 00:15:59.398 "data_offset": 0, 00:15:59.398 "data_size": 65536 00:15:59.398 }, 00:15:59.398 { 00:15:59.398 "name": "BaseBdev4", 00:15:59.399 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:15:59.399 "is_configured": true, 00:15:59.399 "data_offset": 0, 00:15:59.399 "data_size": 65536 00:15:59.399 } 00:15:59.399 ] 00:15:59.399 }' 00:15:59.399 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.399 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.399 03:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.658 03:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.658 03:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.597 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.597 "name": "raid_bdev1", 00:16:00.597 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:00.597 "strip_size_kb": 64, 00:16:00.597 "state": "online", 00:16:00.597 "raid_level": "raid5f", 00:16:00.597 "superblock": false, 00:16:00.597 "num_base_bdevs": 4, 00:16:00.597 "num_base_bdevs_discovered": 4, 00:16:00.597 "num_base_bdevs_operational": 4, 00:16:00.597 "process": { 00:16:00.597 "type": "rebuild", 00:16:00.597 "target": "spare", 00:16:00.597 "progress": { 00:16:00.597 "blocks": 109440, 00:16:00.597 "percent": 55 00:16:00.597 } 00:16:00.597 }, 00:16:00.597 "base_bdevs_list": [ 00:16:00.597 { 00:16:00.597 "name": "spare", 00:16:00.597 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:00.597 "is_configured": true, 00:16:00.597 "data_offset": 0, 00:16:00.597 "data_size": 65536 00:16:00.597 }, 00:16:00.597 { 00:16:00.597 
"name": "BaseBdev2", 00:16:00.597 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:00.597 "is_configured": true, 00:16:00.597 "data_offset": 0, 00:16:00.597 "data_size": 65536 00:16:00.597 }, 00:16:00.597 { 00:16:00.597 "name": "BaseBdev3", 00:16:00.597 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:00.597 "is_configured": true, 00:16:00.597 "data_offset": 0, 00:16:00.597 "data_size": 65536 00:16:00.597 }, 00:16:00.597 { 00:16:00.597 "name": "BaseBdev4", 00:16:00.597 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:00.597 "is_configured": true, 00:16:00.597 "data_offset": 0, 00:16:00.597 "data_size": 65536 00:16:00.597 } 00:16:00.598 ] 00:16:00.598 }' 00:16:00.598 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.598 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.598 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.857 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.857 03:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.796 "name": "raid_bdev1", 00:16:01.796 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:01.796 "strip_size_kb": 64, 00:16:01.796 "state": "online", 00:16:01.796 "raid_level": "raid5f", 00:16:01.796 "superblock": false, 00:16:01.796 "num_base_bdevs": 4, 00:16:01.796 "num_base_bdevs_discovered": 4, 00:16:01.796 "num_base_bdevs_operational": 4, 00:16:01.796 "process": { 00:16:01.796 "type": "rebuild", 00:16:01.796 "target": "spare", 00:16:01.796 "progress": { 00:16:01.796 "blocks": 130560, 00:16:01.796 "percent": 66 00:16:01.796 } 00:16:01.796 }, 00:16:01.796 "base_bdevs_list": [ 00:16:01.796 { 00:16:01.796 "name": "spare", 00:16:01.796 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:01.796 "is_configured": true, 00:16:01.796 "data_offset": 0, 00:16:01.796 "data_size": 65536 00:16:01.796 }, 00:16:01.796 { 00:16:01.796 "name": "BaseBdev2", 00:16:01.796 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:01.796 "is_configured": true, 00:16:01.796 "data_offset": 0, 00:16:01.796 "data_size": 65536 00:16:01.796 }, 00:16:01.796 { 00:16:01.796 "name": "BaseBdev3", 00:16:01.796 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:01.796 "is_configured": true, 00:16:01.796 "data_offset": 0, 00:16:01.796 "data_size": 65536 00:16:01.796 }, 00:16:01.796 { 00:16:01.796 "name": "BaseBdev4", 00:16:01.796 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:01.796 "is_configured": true, 00:16:01.796 "data_offset": 0, 00:16:01.796 
"data_size": 65536 00:16:01.796 } 00:16:01.796 ] 00:16:01.796 }' 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.796 03:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.210 "name": "raid_bdev1", 00:16:03.210 "uuid": 
"ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:03.210 "strip_size_kb": 64, 00:16:03.210 "state": "online", 00:16:03.210 "raid_level": "raid5f", 00:16:03.210 "superblock": false, 00:16:03.210 "num_base_bdevs": 4, 00:16:03.210 "num_base_bdevs_discovered": 4, 00:16:03.210 "num_base_bdevs_operational": 4, 00:16:03.210 "process": { 00:16:03.210 "type": "rebuild", 00:16:03.210 "target": "spare", 00:16:03.210 "progress": { 00:16:03.210 "blocks": 153600, 00:16:03.210 "percent": 78 00:16:03.210 } 00:16:03.210 }, 00:16:03.210 "base_bdevs_list": [ 00:16:03.210 { 00:16:03.210 "name": "spare", 00:16:03.210 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:03.210 "is_configured": true, 00:16:03.210 "data_offset": 0, 00:16:03.210 "data_size": 65536 00:16:03.210 }, 00:16:03.210 { 00:16:03.210 "name": "BaseBdev2", 00:16:03.210 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:03.210 "is_configured": true, 00:16:03.210 "data_offset": 0, 00:16:03.210 "data_size": 65536 00:16:03.210 }, 00:16:03.210 { 00:16:03.210 "name": "BaseBdev3", 00:16:03.210 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:03.210 "is_configured": true, 00:16:03.210 "data_offset": 0, 00:16:03.210 "data_size": 65536 00:16:03.210 }, 00:16:03.210 { 00:16:03.210 "name": "BaseBdev4", 00:16:03.210 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:03.210 "is_configured": true, 00:16:03.210 "data_offset": 0, 00:16:03.210 "data_size": 65536 00:16:03.210 } 00:16:03.210 ] 00:16:03.210 }' 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.210 03:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.151 "name": "raid_bdev1", 00:16:04.151 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:04.151 "strip_size_kb": 64, 00:16:04.151 "state": "online", 00:16:04.151 "raid_level": "raid5f", 00:16:04.151 "superblock": false, 00:16:04.151 "num_base_bdevs": 4, 00:16:04.151 "num_base_bdevs_discovered": 4, 00:16:04.151 "num_base_bdevs_operational": 4, 00:16:04.151 "process": { 00:16:04.151 "type": "rebuild", 00:16:04.151 "target": "spare", 00:16:04.151 "progress": { 00:16:04.151 "blocks": 174720, 00:16:04.151 "percent": 88 00:16:04.151 } 00:16:04.151 }, 00:16:04.151 "base_bdevs_list": [ 00:16:04.151 { 00:16:04.151 "name": "spare", 00:16:04.151 "uuid": 
"a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:04.151 "is_configured": true, 00:16:04.151 "data_offset": 0, 00:16:04.151 "data_size": 65536 00:16:04.151 }, 00:16:04.151 { 00:16:04.151 "name": "BaseBdev2", 00:16:04.151 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:04.151 "is_configured": true, 00:16:04.151 "data_offset": 0, 00:16:04.151 "data_size": 65536 00:16:04.151 }, 00:16:04.151 { 00:16:04.151 "name": "BaseBdev3", 00:16:04.151 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:04.151 "is_configured": true, 00:16:04.151 "data_offset": 0, 00:16:04.151 "data_size": 65536 00:16:04.151 }, 00:16:04.151 { 00:16:04.151 "name": "BaseBdev4", 00:16:04.151 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:04.151 "is_configured": true, 00:16:04.151 "data_offset": 0, 00:16:04.151 "data_size": 65536 00:16:04.151 } 00:16:04.151 ] 00:16:04.151 }' 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.151 03:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.092 [2024-11-21 03:24:52.622091] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.092 [2024-11-21 03:24:52.622151] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.092 [2024-11-21 03:24:52.622197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.092 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.352 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.352 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.352 "name": "raid_bdev1", 00:16:05.352 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:05.352 "strip_size_kb": 64, 00:16:05.352 "state": "online", 00:16:05.352 "raid_level": "raid5f", 00:16:05.352 "superblock": false, 00:16:05.352 "num_base_bdevs": 4, 00:16:05.352 "num_base_bdevs_discovered": 4, 00:16:05.352 "num_base_bdevs_operational": 4, 00:16:05.352 "base_bdevs_list": [ 00:16:05.352 { 00:16:05.352 "name": "spare", 00:16:05.352 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:05.352 "is_configured": true, 00:16:05.352 "data_offset": 0, 00:16:05.352 "data_size": 65536 00:16:05.352 }, 00:16:05.352 { 00:16:05.352 "name": "BaseBdev2", 00:16:05.352 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:05.352 "is_configured": true, 00:16:05.352 "data_offset": 0, 00:16:05.352 "data_size": 65536 00:16:05.352 }, 00:16:05.352 { 00:16:05.352 "name": "BaseBdev3", 00:16:05.353 
"uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:05.353 "is_configured": true, 00:16:05.353 "data_offset": 0, 00:16:05.353 "data_size": 65536 00:16:05.353 }, 00:16:05.353 { 00:16:05.353 "name": "BaseBdev4", 00:16:05.353 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:05.353 "is_configured": true, 00:16:05.353 "data_offset": 0, 00:16:05.353 "data_size": 65536 00:16:05.353 } 00:16:05.353 ] 00:16:05.353 }' 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.353 "name": "raid_bdev1", 00:16:05.353 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:05.353 "strip_size_kb": 64, 00:16:05.353 "state": "online", 00:16:05.353 "raid_level": "raid5f", 00:16:05.353 "superblock": false, 00:16:05.353 "num_base_bdevs": 4, 00:16:05.353 "num_base_bdevs_discovered": 4, 00:16:05.353 "num_base_bdevs_operational": 4, 00:16:05.353 "base_bdevs_list": [ 00:16:05.353 { 00:16:05.353 "name": "spare", 00:16:05.353 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:05.353 "is_configured": true, 00:16:05.353 "data_offset": 0, 00:16:05.353 "data_size": 65536 00:16:05.353 }, 00:16:05.353 { 00:16:05.353 "name": "BaseBdev2", 00:16:05.353 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:05.353 "is_configured": true, 00:16:05.353 "data_offset": 0, 00:16:05.353 "data_size": 65536 00:16:05.353 }, 00:16:05.353 { 00:16:05.353 "name": "BaseBdev3", 00:16:05.353 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:05.353 "is_configured": true, 00:16:05.353 "data_offset": 0, 00:16:05.353 "data_size": 65536 00:16:05.353 }, 00:16:05.353 { 00:16:05.353 "name": "BaseBdev4", 00:16:05.353 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:05.353 "is_configured": true, 00:16:05.353 "data_offset": 0, 00:16:05.353 "data_size": 65536 00:16:05.353 } 00:16:05.353 ] 00:16:05.353 }' 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.353 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.613 "name": "raid_bdev1", 00:16:05.613 "uuid": "ddabc5fb-2a1b-4561-9d35-ac2cb9e4be9c", 00:16:05.613 "strip_size_kb": 64, 00:16:05.613 "state": "online", 00:16:05.613 "raid_level": "raid5f", 00:16:05.613 "superblock": false, 00:16:05.613 "num_base_bdevs": 4, 00:16:05.613 "num_base_bdevs_discovered": 4, 00:16:05.613 
"num_base_bdevs_operational": 4, 00:16:05.613 "base_bdevs_list": [ 00:16:05.613 { 00:16:05.613 "name": "spare", 00:16:05.613 "uuid": "a234b911-e1a0-5208-93df-e2b3481a8ce5", 00:16:05.613 "is_configured": true, 00:16:05.613 "data_offset": 0, 00:16:05.613 "data_size": 65536 00:16:05.613 }, 00:16:05.613 { 00:16:05.613 "name": "BaseBdev2", 00:16:05.613 "uuid": "29905f56-0689-54ad-9d97-874b857aa8ab", 00:16:05.613 "is_configured": true, 00:16:05.613 "data_offset": 0, 00:16:05.613 "data_size": 65536 00:16:05.613 }, 00:16:05.613 { 00:16:05.613 "name": "BaseBdev3", 00:16:05.613 "uuid": "5e649475-0764-5ddd-ba47-ed543c9c485f", 00:16:05.613 "is_configured": true, 00:16:05.613 "data_offset": 0, 00:16:05.613 "data_size": 65536 00:16:05.613 }, 00:16:05.613 { 00:16:05.613 "name": "BaseBdev4", 00:16:05.613 "uuid": "2d7de38d-55a5-558e-a679-7eb10d900fdf", 00:16:05.613 "is_configured": true, 00:16:05.613 "data_offset": 0, 00:16:05.613 "data_size": 65536 00:16:05.613 } 00:16:05.613 ] 00:16:05.613 }' 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.613 03:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.874 [2024-11-21 03:24:53.379896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.874 [2024-11-21 03:24:53.379975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.874 [2024-11-21 03:24:53.380085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.874 [2024-11-21 03:24:53.380183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:05.874 [2024-11-21 03:24:53.380192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:05.874 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.874 
03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.134 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:06.134 /dev/nbd0 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.135 1+0 records in 00:16:06.135 1+0 records out 00:16:06.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319083 s, 12.8 MB/s 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.135 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:06.395 /dev/nbd1 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.395 1+0 records in 00:16:06.395 1+0 records out 00:16:06.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372151 s, 11.0 MB/s 
00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.395 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:06.655 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:06.655 03:24:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.655 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 97073 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 97073 ']' 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- 
# kill -0 97073 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97073 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97073' 00:16:06.915 killing process with pid 97073 00:16:06.915 Received shutdown signal, test time was about 60.000000 seconds 00:16:06.915 00:16:06.915 Latency(us) 00:16:06.915 [2024-11-21T03:24:54.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.915 [2024-11-21T03:24:54.481Z] =================================================================================================================== 00:16:06.915 [2024-11-21T03:24:54.481Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 97073 00:16:06.915 [2024-11-21 03:24:54.472448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.915 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 97073 00:16:07.176 [2024-11-21 03:24:54.522924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.176 03:24:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:07.176 00:16:07.176 real 0m17.352s 00:16:07.176 user 0m21.154s 00:16:07.176 sys 0m2.348s 00:16:07.176 ************************************ 00:16:07.176 END TEST raid5f_rebuild_test 00:16:07.176 ************************************ 00:16:07.176 03:24:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.176 03:24:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.436 03:24:54 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:07.436 03:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:07.436 03:24:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.436 03:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.436 ************************************ 00:16:07.436 START TEST raid5f_rebuild_test_sb 00:16:07.436 ************************************ 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:07.436 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97555 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97555 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 97555 ']' 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.437 03:24:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.437 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:07.437 Zero copy mechanism will not be used. 00:16:07.437 [2024-11-21 03:24:54.916228] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:16:07.437 [2024-11-21 03:24:54.916364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97555 ] 00:16:07.697 [2024-11-21 03:24:55.055956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:07.697 [2024-11-21 03:24:55.094094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.697 [2024-11-21 03:24:55.121161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.697 [2024-11-21 03:24:55.164130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.697 [2024-11-21 03:24:55.164185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.268 BaseBdev1_malloc 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:08.268 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.268 [2024-11-21 03:24:55.755617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:08.268 [2024-11-21 03:24:55.755699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.269 [2024-11-21 03:24:55.755727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:08.269 [2024-11-21 03:24:55.755747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.269 [2024-11-21 03:24:55.757857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.269 [2024-11-21 03:24:55.757898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.269 BaseBdev1 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.269 BaseBdev2_malloc 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.269 [2024-11-21 03:24:55.784278] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:08.269 [2024-11-21 03:24:55.784331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.269 [2024-11-21 03:24:55.784348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:08.269 [2024-11-21 03:24:55.784358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.269 [2024-11-21 03:24:55.786319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.269 [2024-11-21 03:24:55.786413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:08.269 BaseBdev2 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.269 BaseBdev3_malloc 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.269 [2024-11-21 03:24:55.812926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:08.269 [2024-11-21 03:24:55.813027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:08.269 [2024-11-21 03:24:55.813054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:08.269 [2024-11-21 03:24:55.813069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.269 [2024-11-21 03:24:55.815044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.269 [2024-11-21 03:24:55.815082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:08.269 BaseBdev3 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.269 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.529 BaseBdev4_malloc 00:16:08.529 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.529 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:08.529 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.529 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.529 [2024-11-21 03:24:55.850412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:08.529 [2024-11-21 03:24:55.850503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.529 [2024-11-21 03:24:55.850529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:08.529 [2024-11-21 
03:24:55.850542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.529 [2024-11-21 03:24:55.852613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.530 [2024-11-21 03:24:55.852652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:08.530 BaseBdev4 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 spare_malloc 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 spare_delay 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 [2024-11-21 03:24:55.891093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:08.530 [2024-11-21 03:24:55.891148] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.530 [2024-11-21 03:24:55.891170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:08.530 [2024-11-21 03:24:55.891183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.530 [2024-11-21 03:24:55.893183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.530 [2024-11-21 03:24:55.893279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:08.530 spare 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 [2024-11-21 03:24:55.903194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.530 [2024-11-21 03:24:55.904992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.530 [2024-11-21 03:24:55.905066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.530 [2024-11-21 03:24:55.905106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.530 [2024-11-21 03:24:55.905265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:08.530 [2024-11-21 03:24:55.905284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:08.530 [2024-11-21 03:24:55.905531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:08.530 [2024-11-21 03:24:55.905961] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:08.530 [2024-11-21 03:24:55.905971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:08.530 [2024-11-21 03:24:55.906110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.530 "name": "raid_bdev1", 00:16:08.530 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:08.530 "strip_size_kb": 64, 00:16:08.530 "state": "online", 00:16:08.530 "raid_level": "raid5f", 00:16:08.530 "superblock": true, 00:16:08.530 "num_base_bdevs": 4, 00:16:08.530 "num_base_bdevs_discovered": 4, 00:16:08.530 "num_base_bdevs_operational": 4, 00:16:08.530 "base_bdevs_list": [ 00:16:08.530 { 00:16:08.530 "name": "BaseBdev1", 00:16:08.530 "uuid": "8ee73358-325d-5e5c-954e-17cc9bdb49f8", 00:16:08.530 "is_configured": true, 00:16:08.530 "data_offset": 2048, 00:16:08.530 "data_size": 63488 00:16:08.530 }, 00:16:08.530 { 00:16:08.530 "name": "BaseBdev2", 00:16:08.530 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:08.530 "is_configured": true, 00:16:08.530 "data_offset": 2048, 00:16:08.530 "data_size": 63488 00:16:08.530 }, 00:16:08.530 { 00:16:08.530 "name": "BaseBdev3", 00:16:08.530 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:08.530 "is_configured": true, 00:16:08.530 "data_offset": 2048, 00:16:08.530 "data_size": 63488 00:16:08.530 }, 00:16:08.530 { 00:16:08.530 "name": "BaseBdev4", 00:16:08.530 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:08.530 "is_configured": true, 00:16:08.530 "data_offset": 2048, 00:16:08.530 "data_size": 63488 00:16:08.530 } 00:16:08.530 ] 00:16:08.530 }' 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.530 03:24:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:08.793 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:16:08.793 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.793 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 [2024-11-21 03:24:56.340112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.053 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:09.053 [2024-11-21 03:24:56.584092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:09.053 /dev/nbd0 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.313 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.314 1+0 records in 00:16:09.314 1+0 records out 00:16:09.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413279 s, 9.9 MB/s 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:09.314 03:24:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:09.574 496+0 records in 00:16:09.574 496+0 records out 00:16:09.574 97517568 bytes (98 MB, 93 MiB) copied, 0.384868 s, 253 MB/s 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.574 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:09.834 [2024-11-21 03:24:57.280124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.834 [2024-11-21 03:24:57.309777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.834 03:24:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.834 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.834 "name": "raid_bdev1", 00:16:09.834 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:09.834 "strip_size_kb": 64, 00:16:09.834 "state": "online", 00:16:09.834 "raid_level": "raid5f", 00:16:09.835 "superblock": true, 
00:16:09.835 "num_base_bdevs": 4, 00:16:09.835 "num_base_bdevs_discovered": 3, 00:16:09.835 "num_base_bdevs_operational": 3, 00:16:09.835 "base_bdevs_list": [ 00:16:09.835 { 00:16:09.835 "name": null, 00:16:09.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.835 "is_configured": false, 00:16:09.835 "data_offset": 0, 00:16:09.835 "data_size": 63488 00:16:09.835 }, 00:16:09.835 { 00:16:09.835 "name": "BaseBdev2", 00:16:09.835 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:09.835 "is_configured": true, 00:16:09.835 "data_offset": 2048, 00:16:09.835 "data_size": 63488 00:16:09.835 }, 00:16:09.835 { 00:16:09.835 "name": "BaseBdev3", 00:16:09.835 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:09.835 "is_configured": true, 00:16:09.835 "data_offset": 2048, 00:16:09.835 "data_size": 63488 00:16:09.835 }, 00:16:09.835 { 00:16:09.835 "name": "BaseBdev4", 00:16:09.835 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:09.835 "is_configured": true, 00:16:09.835 "data_offset": 2048, 00:16:09.835 "data_size": 63488 00:16:09.835 } 00:16:09.835 ] 00:16:09.835 }' 00:16:09.835 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.835 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.405 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.405 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.405 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.405 [2024-11-21 03:24:57.797910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.405 [2024-11-21 03:24:57.802106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:16:10.405 03:24:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.405 
03:24:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:10.405 [2024-11-21 03:24:57.804317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.346 "name": "raid_bdev1", 00:16:11.346 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:11.346 "strip_size_kb": 64, 00:16:11.346 "state": "online", 00:16:11.346 "raid_level": "raid5f", 00:16:11.346 "superblock": true, 00:16:11.346 "num_base_bdevs": 4, 00:16:11.346 "num_base_bdevs_discovered": 4, 00:16:11.346 "num_base_bdevs_operational": 4, 00:16:11.346 "process": { 00:16:11.346 "type": "rebuild", 00:16:11.346 "target": "spare", 00:16:11.346 "progress": { 00:16:11.346 "blocks": 19200, 00:16:11.346 "percent": 10 00:16:11.346 
} 00:16:11.346 }, 00:16:11.346 "base_bdevs_list": [ 00:16:11.346 { 00:16:11.346 "name": "spare", 00:16:11.346 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:11.346 "is_configured": true, 00:16:11.346 "data_offset": 2048, 00:16:11.346 "data_size": 63488 00:16:11.346 }, 00:16:11.346 { 00:16:11.346 "name": "BaseBdev2", 00:16:11.346 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:11.346 "is_configured": true, 00:16:11.346 "data_offset": 2048, 00:16:11.346 "data_size": 63488 00:16:11.346 }, 00:16:11.346 { 00:16:11.346 "name": "BaseBdev3", 00:16:11.346 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:11.346 "is_configured": true, 00:16:11.346 "data_offset": 2048, 00:16:11.346 "data_size": 63488 00:16:11.346 }, 00:16:11.346 { 00:16:11.346 "name": "BaseBdev4", 00:16:11.346 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:11.346 "is_configured": true, 00:16:11.346 "data_offset": 2048, 00:16:11.346 "data_size": 63488 00:16:11.346 } 00:16:11.346 ] 00:16:11.346 }' 00:16:11.346 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.606 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.606 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.607 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.607 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.607 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.607 03:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.607 [2024-11-21 03:24:58.971299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.607 [2024-11-21 03:24:59.011946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:16:11.607 [2024-11-21 03:24:59.012082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.607 [2024-11-21 03:24:59.012137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.607 [2024-11-21 03:24:59.012180] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.607 "name": "raid_bdev1", 00:16:11.607 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:11.607 "strip_size_kb": 64, 00:16:11.607 "state": "online", 00:16:11.607 "raid_level": "raid5f", 00:16:11.607 "superblock": true, 00:16:11.607 "num_base_bdevs": 4, 00:16:11.607 "num_base_bdevs_discovered": 3, 00:16:11.607 "num_base_bdevs_operational": 3, 00:16:11.607 "base_bdevs_list": [ 00:16:11.607 { 00:16:11.607 "name": null, 00:16:11.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.607 "is_configured": false, 00:16:11.607 "data_offset": 0, 00:16:11.607 "data_size": 63488 00:16:11.607 }, 00:16:11.607 { 00:16:11.607 "name": "BaseBdev2", 00:16:11.607 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:11.607 "is_configured": true, 00:16:11.607 "data_offset": 2048, 00:16:11.607 "data_size": 63488 00:16:11.607 }, 00:16:11.607 { 00:16:11.607 "name": "BaseBdev3", 00:16:11.607 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:11.607 "is_configured": true, 00:16:11.607 "data_offset": 2048, 00:16:11.607 "data_size": 63488 00:16:11.607 }, 00:16:11.607 { 00:16:11.607 "name": "BaseBdev4", 00:16:11.607 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:11.607 "is_configured": true, 00:16:11.607 "data_offset": 2048, 00:16:11.607 "data_size": 63488 00:16:11.607 } 00:16:11.607 ] 00:16:11.607 }' 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.607 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.175 03:24:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.175 "name": "raid_bdev1", 00:16:12.175 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:12.175 "strip_size_kb": 64, 00:16:12.175 "state": "online", 00:16:12.175 "raid_level": "raid5f", 00:16:12.175 "superblock": true, 00:16:12.175 "num_base_bdevs": 4, 00:16:12.175 "num_base_bdevs_discovered": 3, 00:16:12.175 "num_base_bdevs_operational": 3, 00:16:12.175 "base_bdevs_list": [ 00:16:12.175 { 00:16:12.175 "name": null, 00:16:12.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.175 "is_configured": false, 00:16:12.175 "data_offset": 0, 00:16:12.175 "data_size": 63488 00:16:12.175 }, 00:16:12.175 { 00:16:12.175 "name": "BaseBdev2", 00:16:12.175 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:12.175 "is_configured": true, 00:16:12.175 "data_offset": 2048, 00:16:12.175 "data_size": 63488 00:16:12.175 }, 00:16:12.175 { 00:16:12.175 "name": "BaseBdev3", 00:16:12.175 "uuid": 
"7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:12.175 "is_configured": true, 00:16:12.175 "data_offset": 2048, 00:16:12.175 "data_size": 63488 00:16:12.175 }, 00:16:12.175 { 00:16:12.175 "name": "BaseBdev4", 00:16:12.175 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:12.175 "is_configured": true, 00:16:12.175 "data_offset": 2048, 00:16:12.175 "data_size": 63488 00:16:12.175 } 00:16:12.175 ] 00:16:12.175 }' 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.175 [2024-11-21 03:24:59.622045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.175 [2024-11-21 03:24:59.625866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:16:12.175 [2024-11-21 03:24:59.628067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.175 03:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:13.115 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.115 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.115 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.115 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.115 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.115 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.116 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.116 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.116 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.116 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.376 "name": "raid_bdev1", 00:16:13.376 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:13.376 "strip_size_kb": 64, 00:16:13.376 "state": "online", 00:16:13.376 "raid_level": "raid5f", 00:16:13.376 "superblock": true, 00:16:13.376 "num_base_bdevs": 4, 00:16:13.376 "num_base_bdevs_discovered": 4, 00:16:13.376 "num_base_bdevs_operational": 4, 00:16:13.376 "process": { 00:16:13.376 "type": "rebuild", 00:16:13.376 "target": "spare", 00:16:13.376 "progress": { 00:16:13.376 "blocks": 19200, 00:16:13.376 "percent": 10 00:16:13.376 } 00:16:13.376 }, 00:16:13.376 "base_bdevs_list": [ 00:16:13.376 { 00:16:13.376 "name": "spare", 00:16:13.376 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:13.376 "is_configured": true, 00:16:13.376 "data_offset": 2048, 00:16:13.376 "data_size": 63488 00:16:13.376 }, 00:16:13.376 { 00:16:13.376 "name": "BaseBdev2", 00:16:13.376 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:13.376 
"is_configured": true, 00:16:13.376 "data_offset": 2048, 00:16:13.376 "data_size": 63488 00:16:13.376 }, 00:16:13.376 { 00:16:13.376 "name": "BaseBdev3", 00:16:13.376 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:13.376 "is_configured": true, 00:16:13.376 "data_offset": 2048, 00:16:13.376 "data_size": 63488 00:16:13.376 }, 00:16:13.376 { 00:16:13.376 "name": "BaseBdev4", 00:16:13.376 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:13.376 "is_configured": true, 00:16:13.376 "data_offset": 2048, 00:16:13.376 "data_size": 63488 00:16:13.376 } 00:16:13.376 ] 00:16:13.376 }' 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:13.376 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.376 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.376 "name": "raid_bdev1", 00:16:13.376 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:13.376 "strip_size_kb": 64, 00:16:13.376 "state": "online", 00:16:13.376 "raid_level": "raid5f", 00:16:13.376 "superblock": true, 00:16:13.376 "num_base_bdevs": 4, 00:16:13.376 "num_base_bdevs_discovered": 4, 00:16:13.376 "num_base_bdevs_operational": 4, 00:16:13.376 "process": { 00:16:13.376 "type": "rebuild", 00:16:13.376 "target": "spare", 00:16:13.377 "progress": { 00:16:13.377 "blocks": 21120, 00:16:13.377 "percent": 11 00:16:13.377 } 00:16:13.377 }, 00:16:13.377 "base_bdevs_list": [ 00:16:13.377 { 00:16:13.377 "name": "spare", 00:16:13.377 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:13.377 "is_configured": true, 00:16:13.377 "data_offset": 2048, 00:16:13.377 "data_size": 63488 00:16:13.377 }, 00:16:13.377 { 00:16:13.377 "name": "BaseBdev2", 00:16:13.377 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:13.377 
"is_configured": true, 00:16:13.377 "data_offset": 2048, 00:16:13.377 "data_size": 63488 00:16:13.377 }, 00:16:13.377 { 00:16:13.377 "name": "BaseBdev3", 00:16:13.377 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:13.377 "is_configured": true, 00:16:13.377 "data_offset": 2048, 00:16:13.377 "data_size": 63488 00:16:13.377 }, 00:16:13.377 { 00:16:13.377 "name": "BaseBdev4", 00:16:13.377 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:13.377 "is_configured": true, 00:16:13.377 "data_offset": 2048, 00:16:13.377 "data_size": 63488 00:16:13.377 } 00:16:13.377 ] 00:16:13.377 }' 00:16:13.377 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.377 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.377 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.377 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.377 03:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.759 03:25:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.759 "name": "raid_bdev1", 00:16:14.759 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:14.759 "strip_size_kb": 64, 00:16:14.759 "state": "online", 00:16:14.759 "raid_level": "raid5f", 00:16:14.759 "superblock": true, 00:16:14.759 "num_base_bdevs": 4, 00:16:14.759 "num_base_bdevs_discovered": 4, 00:16:14.759 "num_base_bdevs_operational": 4, 00:16:14.759 "process": { 00:16:14.759 "type": "rebuild", 00:16:14.759 "target": "spare", 00:16:14.759 "progress": { 00:16:14.759 "blocks": 44160, 00:16:14.759 "percent": 23 00:16:14.759 } 00:16:14.759 }, 00:16:14.759 "base_bdevs_list": [ 00:16:14.759 { 00:16:14.759 "name": "spare", 00:16:14.759 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:14.759 "is_configured": true, 00:16:14.759 "data_offset": 2048, 00:16:14.759 "data_size": 63488 00:16:14.759 }, 00:16:14.759 { 00:16:14.759 "name": "BaseBdev2", 00:16:14.759 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:14.759 "is_configured": true, 00:16:14.759 "data_offset": 2048, 00:16:14.759 "data_size": 63488 00:16:14.759 }, 00:16:14.759 { 00:16:14.759 "name": "BaseBdev3", 00:16:14.759 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:14.759 "is_configured": true, 00:16:14.759 "data_offset": 2048, 00:16:14.759 "data_size": 63488 00:16:14.759 }, 00:16:14.759 { 00:16:14.759 "name": "BaseBdev4", 00:16:14.759 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:14.759 "is_configured": true, 00:16:14.759 "data_offset": 2048, 00:16:14.759 
"data_size": 63488 00:16:14.759 } 00:16:14.759 ] 00:16:14.759 }' 00:16:14.759 03:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.759 03:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.759 03:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.759 03:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.759 03:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.699 "name": 
"raid_bdev1", 00:16:15.699 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:15.699 "strip_size_kb": 64, 00:16:15.699 "state": "online", 00:16:15.699 "raid_level": "raid5f", 00:16:15.699 "superblock": true, 00:16:15.699 "num_base_bdevs": 4, 00:16:15.699 "num_base_bdevs_discovered": 4, 00:16:15.699 "num_base_bdevs_operational": 4, 00:16:15.699 "process": { 00:16:15.699 "type": "rebuild", 00:16:15.699 "target": "spare", 00:16:15.699 "progress": { 00:16:15.699 "blocks": 65280, 00:16:15.699 "percent": 34 00:16:15.699 } 00:16:15.699 }, 00:16:15.699 "base_bdevs_list": [ 00:16:15.699 { 00:16:15.699 "name": "spare", 00:16:15.699 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:15.699 "is_configured": true, 00:16:15.699 "data_offset": 2048, 00:16:15.699 "data_size": 63488 00:16:15.699 }, 00:16:15.699 { 00:16:15.699 "name": "BaseBdev2", 00:16:15.699 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:15.699 "is_configured": true, 00:16:15.699 "data_offset": 2048, 00:16:15.699 "data_size": 63488 00:16:15.699 }, 00:16:15.699 { 00:16:15.699 "name": "BaseBdev3", 00:16:15.699 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:15.699 "is_configured": true, 00:16:15.699 "data_offset": 2048, 00:16:15.699 "data_size": 63488 00:16:15.699 }, 00:16:15.699 { 00:16:15.699 "name": "BaseBdev4", 00:16:15.699 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:15.699 "is_configured": true, 00:16:15.699 "data_offset": 2048, 00:16:15.699 "data_size": 63488 00:16:15.699 } 00:16:15.699 ] 00:16:15.699 }' 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.699 03:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.699 03:25:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.080 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.081 "name": "raid_bdev1", 00:16:17.081 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:17.081 "strip_size_kb": 64, 00:16:17.081 "state": "online", 00:16:17.081 "raid_level": "raid5f", 00:16:17.081 "superblock": true, 00:16:17.081 "num_base_bdevs": 4, 00:16:17.081 "num_base_bdevs_discovered": 4, 00:16:17.081 "num_base_bdevs_operational": 4, 00:16:17.081 "process": { 00:16:17.081 "type": "rebuild", 00:16:17.081 "target": "spare", 00:16:17.081 "progress": { 00:16:17.081 "blocks": 86400, 00:16:17.081 "percent": 45 00:16:17.081 } 00:16:17.081 }, 00:16:17.081 
"base_bdevs_list": [ 00:16:17.081 { 00:16:17.081 "name": "spare", 00:16:17.081 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 2048, 00:16:17.081 "data_size": 63488 00:16:17.081 }, 00:16:17.081 { 00:16:17.081 "name": "BaseBdev2", 00:16:17.081 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 2048, 00:16:17.081 "data_size": 63488 00:16:17.081 }, 00:16:17.081 { 00:16:17.081 "name": "BaseBdev3", 00:16:17.081 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 2048, 00:16:17.081 "data_size": 63488 00:16:17.081 }, 00:16:17.081 { 00:16:17.081 "name": "BaseBdev4", 00:16:17.081 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 2048, 00:16:17.081 "data_size": 63488 00:16:17.081 } 00:16:17.081 ] 00:16:17.081 }' 00:16:17.081 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.081 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.081 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.081 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.081 03:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.020 "name": "raid_bdev1", 00:16:18.020 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:18.020 "strip_size_kb": 64, 00:16:18.020 "state": "online", 00:16:18.020 "raid_level": "raid5f", 00:16:18.020 "superblock": true, 00:16:18.020 "num_base_bdevs": 4, 00:16:18.020 "num_base_bdevs_discovered": 4, 00:16:18.020 "num_base_bdevs_operational": 4, 00:16:18.020 "process": { 00:16:18.020 "type": "rebuild", 00:16:18.020 "target": "spare", 00:16:18.020 "progress": { 00:16:18.020 "blocks": 109440, 00:16:18.020 "percent": 57 00:16:18.020 } 00:16:18.020 }, 00:16:18.020 "base_bdevs_list": [ 00:16:18.020 { 00:16:18.020 "name": "spare", 00:16:18.020 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:18.020 "is_configured": true, 00:16:18.020 "data_offset": 2048, 00:16:18.020 "data_size": 63488 00:16:18.020 }, 00:16:18.020 { 00:16:18.020 "name": "BaseBdev2", 00:16:18.020 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:18.020 "is_configured": true, 00:16:18.020 "data_offset": 2048, 00:16:18.020 "data_size": 63488 00:16:18.020 }, 00:16:18.020 { 00:16:18.020 "name": "BaseBdev3", 00:16:18.020 "uuid": 
"7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:18.020 "is_configured": true, 00:16:18.020 "data_offset": 2048, 00:16:18.020 "data_size": 63488 00:16:18.020 }, 00:16:18.020 { 00:16:18.020 "name": "BaseBdev4", 00:16:18.020 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:18.020 "is_configured": true, 00:16:18.020 "data_offset": 2048, 00:16:18.020 "data_size": 63488 00:16:18.020 } 00:16:18.020 ] 00:16:18.020 }' 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.020 03:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.401 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.402 "name": "raid_bdev1", 00:16:19.402 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:19.402 "strip_size_kb": 64, 00:16:19.402 "state": "online", 00:16:19.402 "raid_level": "raid5f", 00:16:19.402 "superblock": true, 00:16:19.402 "num_base_bdevs": 4, 00:16:19.402 "num_base_bdevs_discovered": 4, 00:16:19.402 "num_base_bdevs_operational": 4, 00:16:19.402 "process": { 00:16:19.402 "type": "rebuild", 00:16:19.402 "target": "spare", 00:16:19.402 "progress": { 00:16:19.402 "blocks": 130560, 00:16:19.402 "percent": 68 00:16:19.402 } 00:16:19.402 }, 00:16:19.402 "base_bdevs_list": [ 00:16:19.402 { 00:16:19.402 "name": "spare", 00:16:19.402 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 2048, 00:16:19.402 "data_size": 63488 00:16:19.402 }, 00:16:19.402 { 00:16:19.402 "name": "BaseBdev2", 00:16:19.402 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 2048, 00:16:19.402 "data_size": 63488 00:16:19.402 }, 00:16:19.402 { 00:16:19.402 "name": "BaseBdev3", 00:16:19.402 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 2048, 00:16:19.402 "data_size": 63488 00:16:19.402 }, 00:16:19.402 { 00:16:19.402 "name": "BaseBdev4", 00:16:19.402 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 2048, 00:16:19.402 "data_size": 63488 00:16:19.402 } 00:16:19.402 ] 00:16:19.402 }' 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.402 03:25:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.402 03:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.342 "name": "raid_bdev1", 00:16:20.342 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:20.342 "strip_size_kb": 64, 00:16:20.342 "state": "online", 00:16:20.342 "raid_level": "raid5f", 00:16:20.342 "superblock": true, 
00:16:20.342 "num_base_bdevs": 4, 00:16:20.342 "num_base_bdevs_discovered": 4, 00:16:20.342 "num_base_bdevs_operational": 4, 00:16:20.342 "process": { 00:16:20.342 "type": "rebuild", 00:16:20.342 "target": "spare", 00:16:20.342 "progress": { 00:16:20.342 "blocks": 153600, 00:16:20.342 "percent": 80 00:16:20.342 } 00:16:20.342 }, 00:16:20.342 "base_bdevs_list": [ 00:16:20.342 { 00:16:20.342 "name": "spare", 00:16:20.342 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:20.342 "is_configured": true, 00:16:20.342 "data_offset": 2048, 00:16:20.342 "data_size": 63488 00:16:20.342 }, 00:16:20.342 { 00:16:20.342 "name": "BaseBdev2", 00:16:20.342 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:20.342 "is_configured": true, 00:16:20.342 "data_offset": 2048, 00:16:20.342 "data_size": 63488 00:16:20.342 }, 00:16:20.342 { 00:16:20.342 "name": "BaseBdev3", 00:16:20.342 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:20.342 "is_configured": true, 00:16:20.342 "data_offset": 2048, 00:16:20.342 "data_size": 63488 00:16:20.342 }, 00:16:20.342 { 00:16:20.342 "name": "BaseBdev4", 00:16:20.342 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:20.342 "is_configured": true, 00:16:20.342 "data_offset": 2048, 00:16:20.342 "data_size": 63488 00:16:20.342 } 00:16:20.342 ] 00:16:20.342 }' 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.342 03:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.313 03:25:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.313 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.573 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.573 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.573 "name": "raid_bdev1", 00:16:21.573 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:21.573 "strip_size_kb": 64, 00:16:21.573 "state": "online", 00:16:21.573 "raid_level": "raid5f", 00:16:21.573 "superblock": true, 00:16:21.573 "num_base_bdevs": 4, 00:16:21.573 "num_base_bdevs_discovered": 4, 00:16:21.573 "num_base_bdevs_operational": 4, 00:16:21.573 "process": { 00:16:21.573 "type": "rebuild", 00:16:21.573 "target": "spare", 00:16:21.573 "progress": { 00:16:21.573 "blocks": 174720, 00:16:21.573 "percent": 91 00:16:21.573 } 00:16:21.573 }, 00:16:21.573 "base_bdevs_list": [ 00:16:21.573 { 00:16:21.573 "name": "spare", 00:16:21.573 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:21.573 "is_configured": true, 00:16:21.573 "data_offset": 2048, 00:16:21.573 
"data_size": 63488 00:16:21.573 }, 00:16:21.573 { 00:16:21.573 "name": "BaseBdev2", 00:16:21.573 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:21.573 "is_configured": true, 00:16:21.573 "data_offset": 2048, 00:16:21.573 "data_size": 63488 00:16:21.573 }, 00:16:21.573 { 00:16:21.573 "name": "BaseBdev3", 00:16:21.573 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:21.573 "is_configured": true, 00:16:21.573 "data_offset": 2048, 00:16:21.573 "data_size": 63488 00:16:21.573 }, 00:16:21.573 { 00:16:21.573 "name": "BaseBdev4", 00:16:21.573 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:21.573 "is_configured": true, 00:16:21.573 "data_offset": 2048, 00:16:21.573 "data_size": 63488 00:16:21.573 } 00:16:21.573 ] 00:16:21.573 }' 00:16:21.573 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.573 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.573 03:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.573 03:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.573 03:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.157 [2024-11-21 03:25:09.685106] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:22.157 [2024-11-21 03:25:09.685215] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:22.157 [2024-11-21 03:25:09.685377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.726 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.726 "name": "raid_bdev1", 00:16:22.726 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:22.726 "strip_size_kb": 64, 00:16:22.726 "state": "online", 00:16:22.726 "raid_level": "raid5f", 00:16:22.726 "superblock": true, 00:16:22.726 "num_base_bdevs": 4, 00:16:22.726 "num_base_bdevs_discovered": 4, 00:16:22.726 "num_base_bdevs_operational": 4, 00:16:22.726 "base_bdevs_list": [ 00:16:22.726 { 00:16:22.727 "name": "spare", 00:16:22.727 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 }, 00:16:22.727 { 00:16:22.727 "name": "BaseBdev2", 00:16:22.727 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 }, 00:16:22.727 { 00:16:22.727 "name": "BaseBdev3", 00:16:22.727 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 
00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 }, 00:16:22.727 { 00:16:22.727 "name": "BaseBdev4", 00:16:22.727 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 } 00:16:22.727 ] 00:16:22.727 }' 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.727 03:25:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.727 "name": "raid_bdev1", 00:16:22.727 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:22.727 "strip_size_kb": 64, 00:16:22.727 "state": "online", 00:16:22.727 "raid_level": "raid5f", 00:16:22.727 "superblock": true, 00:16:22.727 "num_base_bdevs": 4, 00:16:22.727 "num_base_bdevs_discovered": 4, 00:16:22.727 "num_base_bdevs_operational": 4, 00:16:22.727 "base_bdevs_list": [ 00:16:22.727 { 00:16:22.727 "name": "spare", 00:16:22.727 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 }, 00:16:22.727 { 00:16:22.727 "name": "BaseBdev2", 00:16:22.727 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 }, 00:16:22.727 { 00:16:22.727 "name": "BaseBdev3", 00:16:22.727 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 }, 00:16:22.727 { 00:16:22.727 "name": "BaseBdev4", 00:16:22.727 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:22.727 "is_configured": true, 00:16:22.727 "data_offset": 2048, 00:16:22.727 "data_size": 63488 00:16:22.727 } 00:16:22.727 ] 00:16:22.727 }' 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.727 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.987 03:25:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.987 "name": "raid_bdev1", 00:16:22.987 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:22.987 "strip_size_kb": 64, 00:16:22.987 "state": "online", 00:16:22.987 "raid_level": "raid5f", 00:16:22.987 "superblock": true, 
00:16:22.987 "num_base_bdevs": 4, 00:16:22.987 "num_base_bdevs_discovered": 4, 00:16:22.987 "num_base_bdevs_operational": 4, 00:16:22.987 "base_bdevs_list": [ 00:16:22.987 { 00:16:22.987 "name": "spare", 00:16:22.987 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:22.987 "is_configured": true, 00:16:22.987 "data_offset": 2048, 00:16:22.987 "data_size": 63488 00:16:22.987 }, 00:16:22.987 { 00:16:22.987 "name": "BaseBdev2", 00:16:22.987 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:22.987 "is_configured": true, 00:16:22.987 "data_offset": 2048, 00:16:22.987 "data_size": 63488 00:16:22.987 }, 00:16:22.987 { 00:16:22.987 "name": "BaseBdev3", 00:16:22.987 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:22.987 "is_configured": true, 00:16:22.987 "data_offset": 2048, 00:16:22.987 "data_size": 63488 00:16:22.987 }, 00:16:22.987 { 00:16:22.987 "name": "BaseBdev4", 00:16:22.987 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:22.987 "is_configured": true, 00:16:22.987 "data_offset": 2048, 00:16:22.987 "data_size": 63488 00:16:22.987 } 00:16:22.987 ] 00:16:22.987 }' 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.987 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 [2024-11-21 03:25:10.783160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.247 [2024-11-21 03:25:10.783242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.247 [2024-11-21 03:25:10.783326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.247 
[2024-11-21 03:25:10.783428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.247 [2024-11-21 03:25:10.783441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.507 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.507 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.507 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.508 03:25:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.508 03:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:23.508 /dev/nbd0 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.508 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.768 1+0 records in 00:16:23.768 1+0 records out 00:16:23.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252913 s, 16.2 MB/s 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:23.768 /dev/nbd1 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.768 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.768 03:25:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.768 1+0 records in 00:16:23.768 1+0 records out 00:16:23.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046223 s, 8.9 MB/s 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.029 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.289 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.289 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.289 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.290 [2024-11-21 03:25:11.834271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.290 [2024-11-21 03:25:11.834382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.290 [2024-11-21 03:25:11.834414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:24.290 [2024-11-21 03:25:11.834426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.290 [2024-11-21 03:25:11.836900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.290 [2024-11-21 03:25:11.836946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.290 [2024-11-21 03:25:11.837058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:24.290 [2024-11-21 03:25:11.837111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.290 [2024-11-21 03:25:11.837250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:16:24.290 [2024-11-21 03:25:11.837346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.290 [2024-11-21 03:25:11.837449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.290 spare 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.290 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.550 [2024-11-21 03:25:11.937533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:24.550 [2024-11-21 03:25:11.937565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.550 [2024-11-21 03:25:11.937831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:16:24.550 [2024-11-21 03:25:11.938332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:24.550 [2024-11-21 03:25:11.938345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:24.550 [2024-11-21 03:25:11.938502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.550 03:25:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.550 "name": "raid_bdev1", 00:16:24.550 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:24.550 "strip_size_kb": 64, 00:16:24.550 "state": "online", 00:16:24.550 "raid_level": "raid5f", 00:16:24.550 "superblock": true, 00:16:24.550 "num_base_bdevs": 4, 00:16:24.550 "num_base_bdevs_discovered": 4, 00:16:24.550 "num_base_bdevs_operational": 4, 00:16:24.550 "base_bdevs_list": [ 00:16:24.550 { 00:16:24.550 "name": "spare", 00:16:24.550 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:24.550 "is_configured": true, 00:16:24.550 "data_offset": 2048, 00:16:24.550 "data_size": 63488 
00:16:24.550 }, 00:16:24.550 { 00:16:24.550 "name": "BaseBdev2", 00:16:24.550 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:24.550 "is_configured": true, 00:16:24.550 "data_offset": 2048, 00:16:24.550 "data_size": 63488 00:16:24.550 }, 00:16:24.550 { 00:16:24.550 "name": "BaseBdev3", 00:16:24.550 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:24.550 "is_configured": true, 00:16:24.550 "data_offset": 2048, 00:16:24.550 "data_size": 63488 00:16:24.550 }, 00:16:24.550 { 00:16:24.550 "name": "BaseBdev4", 00:16:24.550 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:24.550 "is_configured": true, 00:16:24.550 "data_offset": 2048, 00:16:24.550 "data_size": 63488 00:16:24.550 } 00:16:24.550 ] 00:16:24.550 }' 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.550 03:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.121 03:25:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.121 "name": "raid_bdev1", 00:16:25.121 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:25.121 "strip_size_kb": 64, 00:16:25.121 "state": "online", 00:16:25.121 "raid_level": "raid5f", 00:16:25.121 "superblock": true, 00:16:25.121 "num_base_bdevs": 4, 00:16:25.121 "num_base_bdevs_discovered": 4, 00:16:25.121 "num_base_bdevs_operational": 4, 00:16:25.121 "base_bdevs_list": [ 00:16:25.121 { 00:16:25.121 "name": "spare", 00:16:25.121 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 00:16:25.121 "data_size": 63488 00:16:25.121 }, 00:16:25.121 { 00:16:25.121 "name": "BaseBdev2", 00:16:25.121 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 00:16:25.121 "data_size": 63488 00:16:25.121 }, 00:16:25.121 { 00:16:25.121 "name": "BaseBdev3", 00:16:25.121 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 00:16:25.121 "data_size": 63488 00:16:25.121 }, 00:16:25.121 { 00:16:25.121 "name": "BaseBdev4", 00:16:25.121 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 00:16:25.121 "data_size": 63488 00:16:25.121 } 00:16:25.121 ] 00:16:25.121 }' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.121 03:25:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.121 [2024-11-21 03:25:12.586651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.121 "name": "raid_bdev1", 00:16:25.121 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:25.121 "strip_size_kb": 64, 00:16:25.121 "state": "online", 00:16:25.121 "raid_level": "raid5f", 00:16:25.121 "superblock": true, 00:16:25.121 "num_base_bdevs": 4, 00:16:25.121 "num_base_bdevs_discovered": 3, 00:16:25.121 "num_base_bdevs_operational": 3, 00:16:25.121 "base_bdevs_list": [ 00:16:25.121 { 00:16:25.121 "name": null, 00:16:25.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.121 "is_configured": false, 00:16:25.121 "data_offset": 0, 00:16:25.121 "data_size": 63488 00:16:25.121 }, 00:16:25.121 { 00:16:25.121 "name": "BaseBdev2", 00:16:25.121 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 00:16:25.121 "data_size": 63488 00:16:25.121 }, 00:16:25.121 { 00:16:25.121 "name": "BaseBdev3", 00:16:25.121 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 
00:16:25.121 "data_size": 63488 00:16:25.121 }, 00:16:25.121 { 00:16:25.121 "name": "BaseBdev4", 00:16:25.121 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:25.121 "is_configured": true, 00:16:25.121 "data_offset": 2048, 00:16:25.121 "data_size": 63488 00:16:25.121 } 00:16:25.121 ] 00:16:25.121 }' 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.121 03:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.692 03:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.692 03:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.692 03:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.692 [2024-11-21 03:25:13.038774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.692 [2024-11-21 03:25:13.039010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:25.692 [2024-11-21 03:25:13.039050] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:25.692 [2024-11-21 03:25:13.039091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.692 [2024-11-21 03:25:13.046417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:16:25.692 03:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.692 03:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:25.692 [2024-11-21 03:25:13.048966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.633 "name": "raid_bdev1", 00:16:26.633 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:26.633 "strip_size_kb": 64, 00:16:26.633 "state": "online", 00:16:26.633 
"raid_level": "raid5f", 00:16:26.633 "superblock": true, 00:16:26.633 "num_base_bdevs": 4, 00:16:26.633 "num_base_bdevs_discovered": 4, 00:16:26.633 "num_base_bdevs_operational": 4, 00:16:26.633 "process": { 00:16:26.633 "type": "rebuild", 00:16:26.633 "target": "spare", 00:16:26.633 "progress": { 00:16:26.633 "blocks": 19200, 00:16:26.633 "percent": 10 00:16:26.633 } 00:16:26.633 }, 00:16:26.633 "base_bdevs_list": [ 00:16:26.633 { 00:16:26.633 "name": "spare", 00:16:26.633 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:26.633 "is_configured": true, 00:16:26.633 "data_offset": 2048, 00:16:26.633 "data_size": 63488 00:16:26.633 }, 00:16:26.633 { 00:16:26.633 "name": "BaseBdev2", 00:16:26.633 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:26.633 "is_configured": true, 00:16:26.633 "data_offset": 2048, 00:16:26.633 "data_size": 63488 00:16:26.633 }, 00:16:26.633 { 00:16:26.633 "name": "BaseBdev3", 00:16:26.633 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:26.633 "is_configured": true, 00:16:26.633 "data_offset": 2048, 00:16:26.633 "data_size": 63488 00:16:26.633 }, 00:16:26.633 { 00:16:26.633 "name": "BaseBdev4", 00:16:26.633 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:26.633 "is_configured": true, 00:16:26.633 "data_offset": 2048, 00:16:26.633 "data_size": 63488 00:16:26.633 } 00:16:26.633 ] 00:16:26.633 }' 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.633 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.893 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.893 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:26.893 03:25:14 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.893 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.893 [2024-11-21 03:25:14.210542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.893 [2024-11-21 03:25:14.257421] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.894 [2024-11-21 03:25:14.257537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.894 [2024-11-21 03:25:14.257558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.894 [2024-11-21 03:25:14.257571] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.894 "name": "raid_bdev1", 00:16:26.894 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:26.894 "strip_size_kb": 64, 00:16:26.894 "state": "online", 00:16:26.894 "raid_level": "raid5f", 00:16:26.894 "superblock": true, 00:16:26.894 "num_base_bdevs": 4, 00:16:26.894 "num_base_bdevs_discovered": 3, 00:16:26.894 "num_base_bdevs_operational": 3, 00:16:26.894 "base_bdevs_list": [ 00:16:26.894 { 00:16:26.894 "name": null, 00:16:26.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.894 "is_configured": false, 00:16:26.894 "data_offset": 0, 00:16:26.894 "data_size": 63488 00:16:26.894 }, 00:16:26.894 { 00:16:26.894 "name": "BaseBdev2", 00:16:26.894 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:26.894 "is_configured": true, 00:16:26.894 "data_offset": 2048, 00:16:26.894 "data_size": 63488 00:16:26.894 }, 00:16:26.894 { 00:16:26.894 "name": "BaseBdev3", 00:16:26.894 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:26.894 "is_configured": true, 00:16:26.894 "data_offset": 2048, 00:16:26.894 "data_size": 63488 00:16:26.894 }, 00:16:26.894 { 00:16:26.894 "name": "BaseBdev4", 00:16:26.894 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:26.894 "is_configured": true, 00:16:26.894 "data_offset": 2048, 00:16:26.894 "data_size": 63488 00:16:26.894 } 00:16:26.894 ] 00:16:26.894 
}' 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.894 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.465 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.465 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.465 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.465 [2024-11-21 03:25:14.746518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.465 [2024-11-21 03:25:14.746629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.465 [2024-11-21 03:25:14.746677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:27.465 [2024-11-21 03:25:14.746715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.465 [2024-11-21 03:25:14.747243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.465 [2024-11-21 03:25:14.747315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.465 [2024-11-21 03:25:14.747438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:27.465 [2024-11-21 03:25:14.747490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:27.465 [2024-11-21 03:25:14.747543] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:27.465 [2024-11-21 03:25:14.747633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.465 [2024-11-21 03:25:14.752678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049780 00:16:27.465 spare 00:16:27.465 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.465 03:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:27.465 [2024-11-21 03:25:14.755207] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.406 "name": "raid_bdev1", 00:16:28.406 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:28.406 "strip_size_kb": 64, 00:16:28.406 "state": 
"online", 00:16:28.406 "raid_level": "raid5f", 00:16:28.406 "superblock": true, 00:16:28.406 "num_base_bdevs": 4, 00:16:28.406 "num_base_bdevs_discovered": 4, 00:16:28.406 "num_base_bdevs_operational": 4, 00:16:28.406 "process": { 00:16:28.406 "type": "rebuild", 00:16:28.406 "target": "spare", 00:16:28.406 "progress": { 00:16:28.406 "blocks": 19200, 00:16:28.406 "percent": 10 00:16:28.406 } 00:16:28.406 }, 00:16:28.406 "base_bdevs_list": [ 00:16:28.406 { 00:16:28.406 "name": "spare", 00:16:28.406 "uuid": "170963cd-a2bc-5c82-8ef5-cbae7b9ba0c3", 00:16:28.406 "is_configured": true, 00:16:28.406 "data_offset": 2048, 00:16:28.406 "data_size": 63488 00:16:28.406 }, 00:16:28.406 { 00:16:28.406 "name": "BaseBdev2", 00:16:28.406 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:28.406 "is_configured": true, 00:16:28.406 "data_offset": 2048, 00:16:28.406 "data_size": 63488 00:16:28.406 }, 00:16:28.406 { 00:16:28.406 "name": "BaseBdev3", 00:16:28.406 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:28.406 "is_configured": true, 00:16:28.406 "data_offset": 2048, 00:16:28.406 "data_size": 63488 00:16:28.406 }, 00:16:28.406 { 00:16:28.406 "name": "BaseBdev4", 00:16:28.406 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:28.406 "is_configured": true, 00:16:28.406 "data_offset": 2048, 00:16:28.406 "data_size": 63488 00:16:28.406 } 00:16:28.406 ] 00:16:28.406 }' 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:28.406 03:25:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.406 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.406 [2024-11-21 03:25:15.916622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.406 [2024-11-21 03:25:15.963559] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.406 [2024-11-21 03:25:15.963669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.406 [2024-11-21 03:25:15.963713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.406 [2024-11-21 03:25:15.963737] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.666 03:25:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.666 03:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.666 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.666 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.666 "name": "raid_bdev1", 00:16:28.666 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:28.666 "strip_size_kb": 64, 00:16:28.666 "state": "online", 00:16:28.666 "raid_level": "raid5f", 00:16:28.666 "superblock": true, 00:16:28.666 "num_base_bdevs": 4, 00:16:28.666 "num_base_bdevs_discovered": 3, 00:16:28.666 "num_base_bdevs_operational": 3, 00:16:28.666 "base_bdevs_list": [ 00:16:28.666 { 00:16:28.666 "name": null, 00:16:28.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.666 "is_configured": false, 00:16:28.666 "data_offset": 0, 00:16:28.666 "data_size": 63488 00:16:28.666 }, 00:16:28.666 { 00:16:28.666 "name": "BaseBdev2", 00:16:28.666 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:28.666 "is_configured": true, 00:16:28.666 "data_offset": 2048, 00:16:28.666 "data_size": 63488 00:16:28.666 }, 00:16:28.666 { 00:16:28.666 "name": "BaseBdev3", 00:16:28.666 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:28.666 "is_configured": true, 00:16:28.666 "data_offset": 2048, 00:16:28.666 "data_size": 63488 00:16:28.666 }, 00:16:28.667 { 00:16:28.667 "name": "BaseBdev4", 00:16:28.667 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:28.667 "is_configured": true, 00:16:28.667 "data_offset": 2048, 00:16:28.667 
"data_size": 63488 00:16:28.667 } 00:16:28.667 ] 00:16:28.667 }' 00:16:28.667 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.667 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.927 "name": "raid_bdev1", 00:16:28.927 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:28.927 "strip_size_kb": 64, 00:16:28.927 "state": "online", 00:16:28.927 "raid_level": "raid5f", 00:16:28.927 "superblock": true, 00:16:28.927 "num_base_bdevs": 4, 00:16:28.927 "num_base_bdevs_discovered": 3, 00:16:28.927 "num_base_bdevs_operational": 3, 00:16:28.927 "base_bdevs_list": [ 00:16:28.927 { 00:16:28.927 "name": null, 00:16:28.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.927 
"is_configured": false, 00:16:28.927 "data_offset": 0, 00:16:28.927 "data_size": 63488 00:16:28.927 }, 00:16:28.927 { 00:16:28.927 "name": "BaseBdev2", 00:16:28.927 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:28.927 "is_configured": true, 00:16:28.927 "data_offset": 2048, 00:16:28.927 "data_size": 63488 00:16:28.927 }, 00:16:28.927 { 00:16:28.927 "name": "BaseBdev3", 00:16:28.927 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:28.927 "is_configured": true, 00:16:28.927 "data_offset": 2048, 00:16:28.927 "data_size": 63488 00:16:28.927 }, 00:16:28.927 { 00:16:28.927 "name": "BaseBdev4", 00:16:28.927 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:28.927 "is_configured": true, 00:16:28.927 "data_offset": 2048, 00:16:28.927 "data_size": 63488 00:16:28.927 } 00:16:28.927 ] 00:16:28.927 }' 00:16:28.927 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.188 03:25:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.188 [2024-11-21 03:25:16.588664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:29.188 [2024-11-21 03:25:16.588720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.188 [2024-11-21 03:25:16.588746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:29.188 [2024-11-21 03:25:16.588757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.188 [2024-11-21 03:25:16.589260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.188 [2024-11-21 03:25:16.589279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.188 [2024-11-21 03:25:16.589363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:29.188 [2024-11-21 03:25:16.589377] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:29.188 [2024-11-21 03:25:16.589392] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:29.188 [2024-11-21 03:25:16.589403] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:29.188 BaseBdev1 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.188 03:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.128 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.129 "name": "raid_bdev1", 00:16:30.129 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:30.129 "strip_size_kb": 64, 00:16:30.129 "state": "online", 00:16:30.129 "raid_level": "raid5f", 00:16:30.129 "superblock": true, 00:16:30.129 "num_base_bdevs": 4, 00:16:30.129 "num_base_bdevs_discovered": 3, 00:16:30.129 "num_base_bdevs_operational": 3, 00:16:30.129 "base_bdevs_list": [ 00:16:30.129 { 00:16:30.129 "name": null, 00:16:30.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.129 "is_configured": false, 00:16:30.129 
"data_offset": 0, 00:16:30.129 "data_size": 63488 00:16:30.129 }, 00:16:30.129 { 00:16:30.129 "name": "BaseBdev2", 00:16:30.129 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:30.129 "is_configured": true, 00:16:30.129 "data_offset": 2048, 00:16:30.129 "data_size": 63488 00:16:30.129 }, 00:16:30.129 { 00:16:30.129 "name": "BaseBdev3", 00:16:30.129 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:30.129 "is_configured": true, 00:16:30.129 "data_offset": 2048, 00:16:30.129 "data_size": 63488 00:16:30.129 }, 00:16:30.129 { 00:16:30.129 "name": "BaseBdev4", 00:16:30.129 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:30.129 "is_configured": true, 00:16:30.129 "data_offset": 2048, 00:16:30.129 "data_size": 63488 00:16:30.129 } 00:16:30.129 ] 00:16:30.129 }' 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.129 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.699 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.699 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.699 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.699 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.699 03:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.699 "name": "raid_bdev1", 00:16:30.699 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:30.699 "strip_size_kb": 64, 00:16:30.699 "state": "online", 00:16:30.699 "raid_level": "raid5f", 00:16:30.699 "superblock": true, 00:16:30.699 "num_base_bdevs": 4, 00:16:30.699 "num_base_bdevs_discovered": 3, 00:16:30.699 "num_base_bdevs_operational": 3, 00:16:30.699 "base_bdevs_list": [ 00:16:30.699 { 00:16:30.699 "name": null, 00:16:30.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.699 "is_configured": false, 00:16:30.699 "data_offset": 0, 00:16:30.699 "data_size": 63488 00:16:30.699 }, 00:16:30.699 { 00:16:30.699 "name": "BaseBdev2", 00:16:30.699 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:30.699 "is_configured": true, 00:16:30.699 "data_offset": 2048, 00:16:30.699 "data_size": 63488 00:16:30.699 }, 00:16:30.699 { 00:16:30.699 "name": "BaseBdev3", 00:16:30.699 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:30.699 "is_configured": true, 00:16:30.699 "data_offset": 2048, 00:16:30.699 "data_size": 63488 00:16:30.699 }, 00:16:30.699 { 00:16:30.699 "name": "BaseBdev4", 00:16:30.699 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:30.699 "is_configured": true, 00:16:30.699 "data_offset": 2048, 00:16:30.699 "data_size": 63488 00:16:30.699 } 00:16:30.699 ] 00:16:30.699 }' 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.699 
03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.699 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.699 [2024-11-21 03:25:18.157090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.699 [2024-11-21 03:25:18.157224] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:30.699 [2024-11-21 03:25:18.157245] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:30.699 request: 00:16:30.699 { 00:16:30.699 "base_bdev": "BaseBdev1", 00:16:30.699 "raid_bdev": "raid_bdev1", 00:16:30.700 "method": "bdev_raid_add_base_bdev", 00:16:30.700 "req_id": 1 00:16:30.700 } 00:16:30.700 Got JSON-RPC error response 00:16:30.700 response: 00:16:30.700 { 00:16:30.700 "code": -22, 00:16:30.700 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:30.700 } 00:16:30.700 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:30.700 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:30.700 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.700 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.700 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.700 03:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.638 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.896 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.896 "name": "raid_bdev1", 00:16:31.896 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:31.896 "strip_size_kb": 64, 00:16:31.896 "state": "online", 00:16:31.896 "raid_level": "raid5f", 00:16:31.896 "superblock": true, 00:16:31.896 "num_base_bdevs": 4, 00:16:31.896 "num_base_bdevs_discovered": 3, 00:16:31.896 "num_base_bdevs_operational": 3, 00:16:31.896 "base_bdevs_list": [ 00:16:31.896 { 00:16:31.896 "name": null, 00:16:31.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.896 "is_configured": false, 00:16:31.896 "data_offset": 0, 00:16:31.896 "data_size": 63488 00:16:31.896 }, 00:16:31.896 { 00:16:31.896 "name": "BaseBdev2", 00:16:31.896 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:31.896 "is_configured": true, 00:16:31.896 "data_offset": 2048, 00:16:31.896 "data_size": 63488 00:16:31.896 }, 00:16:31.896 { 00:16:31.896 "name": "BaseBdev3", 00:16:31.896 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:31.896 "is_configured": true, 00:16:31.896 "data_offset": 2048, 00:16:31.896 "data_size": 63488 00:16:31.896 }, 00:16:31.896 { 00:16:31.896 "name": "BaseBdev4", 00:16:31.896 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:31.896 "is_configured": true, 00:16:31.896 "data_offset": 2048, 00:16:31.896 "data_size": 63488 00:16:31.896 } 00:16:31.896 ] 00:16:31.896 }' 00:16:31.896 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.896 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.155 "name": "raid_bdev1", 00:16:32.155 "uuid": "aa961aa2-d1e1-41d3-b692-bfdc1a4307a0", 00:16:32.155 "strip_size_kb": 64, 00:16:32.155 "state": "online", 00:16:32.155 "raid_level": "raid5f", 00:16:32.155 "superblock": true, 00:16:32.155 "num_base_bdevs": 4, 00:16:32.155 "num_base_bdevs_discovered": 3, 00:16:32.155 "num_base_bdevs_operational": 3, 00:16:32.155 "base_bdevs_list": [ 00:16:32.155 { 00:16:32.155 "name": null, 00:16:32.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.155 "is_configured": false, 00:16:32.155 "data_offset": 0, 00:16:32.155 "data_size": 63488 00:16:32.155 }, 00:16:32.155 { 00:16:32.155 "name": "BaseBdev2", 00:16:32.155 "uuid": "bcf36f98-1ce1-5e56-a148-7c6f91e9d728", 00:16:32.155 "is_configured": true, 
00:16:32.155 "data_offset": 2048, 00:16:32.155 "data_size": 63488 00:16:32.155 }, 00:16:32.155 { 00:16:32.155 "name": "BaseBdev3", 00:16:32.155 "uuid": "7db79f0c-54cb-5b2b-b2d0-f4470888121f", 00:16:32.155 "is_configured": true, 00:16:32.155 "data_offset": 2048, 00:16:32.155 "data_size": 63488 00:16:32.155 }, 00:16:32.155 { 00:16:32.155 "name": "BaseBdev4", 00:16:32.155 "uuid": "f721faf8-287d-51fb-8b81-ffb17a57fd12", 00:16:32.155 "is_configured": true, 00:16:32.155 "data_offset": 2048, 00:16:32.155 "data_size": 63488 00:16:32.155 } 00:16:32.155 ] 00:16:32.155 }' 00:16:32.155 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97555 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 97555 ']' 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 97555 00:16:32.413 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97555 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.414 killing process with pid 97555 00:16:32.414 Received shutdown signal, test 
time was about 60.000000 seconds 00:16:32.414 00:16:32.414 Latency(us) 00:16:32.414 [2024-11-21T03:25:19.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.414 [2024-11-21T03:25:19.980Z] =================================================================================================================== 00:16:32.414 [2024-11-21T03:25:19.980Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97555' 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 97555 00:16:32.414 [2024-11-21 03:25:19.839349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.414 [2024-11-21 03:25:19.839461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.414 03:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 97555 00:16:32.414 [2024-11-21 03:25:19.839534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.414 [2024-11-21 03:25:19.839548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.414 [2024-11-21 03:25:19.934133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.981 03:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:32.981 00:16:32.981 real 0m25.443s 00:16:32.981 user 0m32.426s 00:16:32.981 sys 0m3.071s 00:16:32.981 ************************************ 00:16:32.981 END TEST raid5f_rebuild_test_sb 00:16:32.981 03:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.981 03:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.981 ************************************ 00:16:32.981 03:25:20 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:32.981 03:25:20 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:32.981 03:25:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:32.981 03:25:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.981 03:25:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.981 ************************************ 00:16:32.981 START TEST raid_state_function_test_sb_4k 00:16:32.981 ************************************ 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.981 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.982 03:25:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=98350 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98350' 00:16:32.982 Process raid pid: 98350 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 98350 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98350 ']' 00:16:32.982 03:25:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.982 03:25:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.982 [2024-11-21 03:25:20.439004] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:32.982 [2024-11-21 03:25:20.439660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.241 [2024-11-21 03:25:20.577964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:33.241 [2024-11-21 03:25:20.616564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.241 [2024-11-21 03:25:20.657850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.241 [2024-11-21 03:25:20.734006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.241 [2024-11-21 03:25:20.734071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.810 [2024-11-21 03:25:21.267679] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.810 [2024-11-21 03:25:21.267740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.810 [2024-11-21 03:25:21.267755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.810 [2024-11-21 03:25:21.267765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.810 
03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.810 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.810 "name": "Existed_Raid", 00:16:33.810 "uuid": "9dfd87ed-26bf-4359-ac93-e56d653c5cf9", 00:16:33.810 "strip_size_kb": 0, 00:16:33.810 "state": "configuring", 00:16:33.810 "raid_level": "raid1", 00:16:33.810 "superblock": true, 00:16:33.810 "num_base_bdevs": 2, 00:16:33.810 "num_base_bdevs_discovered": 0, 00:16:33.810 "num_base_bdevs_operational": 2, 
00:16:33.810 "base_bdevs_list": [ 00:16:33.810 { 00:16:33.810 "name": "BaseBdev1", 00:16:33.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.810 "is_configured": false, 00:16:33.810 "data_offset": 0, 00:16:33.810 "data_size": 0 00:16:33.810 }, 00:16:33.810 { 00:16:33.810 "name": "BaseBdev2", 00:16:33.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.810 "is_configured": false, 00:16:33.810 "data_offset": 0, 00:16:33.810 "data_size": 0 00:16:33.811 } 00:16:33.811 ] 00:16:33.811 }' 00:16:33.811 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.811 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.381 [2024-11-21 03:25:21.747658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.381 [2024-11-21 03:25:21.747759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.381 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.381 [2024-11-21 03:25:21.759685] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:16:34.382 [2024-11-21 03:25:21.759762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.382 [2024-11-21 03:25:21.759795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.382 [2024-11-21 03:25:21.759819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.382 [2024-11-21 03:25:21.786735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.382 BaseBdev1 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.382 03:25:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.382 [ 00:16:34.382 { 00:16:34.382 "name": "BaseBdev1", 00:16:34.382 "aliases": [ 00:16:34.382 "317df5ff-b891-40b8-8ead-0eb2ec35682b" 00:16:34.382 ], 00:16:34.382 "product_name": "Malloc disk", 00:16:34.382 "block_size": 4096, 00:16:34.382 "num_blocks": 8192, 00:16:34.382 "uuid": "317df5ff-b891-40b8-8ead-0eb2ec35682b", 00:16:34.382 "assigned_rate_limits": { 00:16:34.382 "rw_ios_per_sec": 0, 00:16:34.382 "rw_mbytes_per_sec": 0, 00:16:34.382 "r_mbytes_per_sec": 0, 00:16:34.382 "w_mbytes_per_sec": 0 00:16:34.382 }, 00:16:34.382 "claimed": true, 00:16:34.382 "claim_type": "exclusive_write", 00:16:34.382 "zoned": false, 00:16:34.382 "supported_io_types": { 00:16:34.382 "read": true, 00:16:34.382 "write": true, 00:16:34.382 "unmap": true, 00:16:34.382 "flush": true, 00:16:34.382 "reset": true, 00:16:34.382 "nvme_admin": false, 00:16:34.382 "nvme_io": false, 00:16:34.382 "nvme_io_md": false, 00:16:34.382 "write_zeroes": true, 00:16:34.382 "zcopy": true, 00:16:34.382 "get_zone_info": false, 00:16:34.382 "zone_management": false, 00:16:34.382 "zone_append": false, 00:16:34.382 "compare": false, 00:16:34.382 "compare_and_write": false, 00:16:34.382 "abort": true, 00:16:34.382 "seek_hole": false, 00:16:34.382 "seek_data": false, 00:16:34.382 "copy": true, 00:16:34.382 "nvme_iov_md": false 
00:16:34.382 }, 00:16:34.382 "memory_domains": [ 00:16:34.382 { 00:16:34.382 "dma_device_id": "system", 00:16:34.382 "dma_device_type": 1 00:16:34.382 }, 00:16:34.382 { 00:16:34.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.382 "dma_device_type": 2 00:16:34.382 } 00:16:34.382 ], 00:16:34.382 "driver_specific": {} 00:16:34.382 } 00:16:34.382 ] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.382 "name": "Existed_Raid", 00:16:34.382 "uuid": "1f4b0822-ecc0-4325-846a-c8929ee71bf0", 00:16:34.382 "strip_size_kb": 0, 00:16:34.382 "state": "configuring", 00:16:34.382 "raid_level": "raid1", 00:16:34.382 "superblock": true, 00:16:34.382 "num_base_bdevs": 2, 00:16:34.382 "num_base_bdevs_discovered": 1, 00:16:34.382 "num_base_bdevs_operational": 2, 00:16:34.382 "base_bdevs_list": [ 00:16:34.382 { 00:16:34.382 "name": "BaseBdev1", 00:16:34.382 "uuid": "317df5ff-b891-40b8-8ead-0eb2ec35682b", 00:16:34.382 "is_configured": true, 00:16:34.382 "data_offset": 256, 00:16:34.382 "data_size": 7936 00:16:34.382 }, 00:16:34.382 { 00:16:34.382 "name": "BaseBdev2", 00:16:34.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.382 "is_configured": false, 00:16:34.382 "data_offset": 0, 00:16:34.382 "data_size": 0 00:16:34.382 } 00:16:34.382 ] 00:16:34.382 }' 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.382 03:25:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.953 [2024-11-21 
03:25:22.246855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.953 [2024-11-21 03:25:22.246905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.953 [2024-11-21 03:25:22.254904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.953 [2024-11-21 03:25:22.257073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.953 [2024-11-21 03:25:22.257160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.953 "name": "Existed_Raid", 00:16:34.953 "uuid": "ae00684f-a5f4-4703-81b2-a6c7508d2718", 00:16:34.953 "strip_size_kb": 0, 00:16:34.953 "state": "configuring", 00:16:34.953 "raid_level": "raid1", 00:16:34.953 "superblock": true, 00:16:34.953 "num_base_bdevs": 2, 00:16:34.953 "num_base_bdevs_discovered": 1, 00:16:34.953 "num_base_bdevs_operational": 2, 00:16:34.953 "base_bdevs_list": [ 00:16:34.953 { 00:16:34.953 "name": "BaseBdev1", 00:16:34.953 "uuid": "317df5ff-b891-40b8-8ead-0eb2ec35682b", 00:16:34.953 "is_configured": true, 00:16:34.953 "data_offset": 256, 
00:16:34.953 "data_size": 7936 00:16:34.953 }, 00:16:34.953 { 00:16:34.953 "name": "BaseBdev2", 00:16:34.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.953 "is_configured": false, 00:16:34.953 "data_offset": 0, 00:16:34.953 "data_size": 0 00:16:34.953 } 00:16:34.953 ] 00:16:34.953 }' 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.953 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 [2024-11-21 03:25:22.683747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.214 [2024-11-21 03:25:22.684085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:35.214 [2024-11-21 03:25:22.684156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.214 BaseBdev2 00:16:35.214 [2024-11-21 03:25:22.684514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:35.214 [2024-11-21 03:25:22.684758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:35.214 [2024-11-21 03:25:22.684806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:35.214 [2024-11-21 03:25:22.684994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 [ 00:16:35.214 { 00:16:35.214 "name": "BaseBdev2", 00:16:35.214 "aliases": [ 00:16:35.214 "afb85471-6be7-462d-9435-4b189289a5e1" 00:16:35.214 ], 00:16:35.214 "product_name": "Malloc disk", 00:16:35.214 "block_size": 4096, 00:16:35.214 "num_blocks": 8192, 00:16:35.214 "uuid": "afb85471-6be7-462d-9435-4b189289a5e1", 00:16:35.214 "assigned_rate_limits": { 00:16:35.214 "rw_ios_per_sec": 0, 00:16:35.214 "rw_mbytes_per_sec": 0, 00:16:35.214 "r_mbytes_per_sec": 0, 00:16:35.214 "w_mbytes_per_sec": 0 00:16:35.214 }, 
00:16:35.214 "claimed": true, 00:16:35.214 "claim_type": "exclusive_write", 00:16:35.214 "zoned": false, 00:16:35.214 "supported_io_types": { 00:16:35.214 "read": true, 00:16:35.214 "write": true, 00:16:35.214 "unmap": true, 00:16:35.214 "flush": true, 00:16:35.214 "reset": true, 00:16:35.214 "nvme_admin": false, 00:16:35.214 "nvme_io": false, 00:16:35.214 "nvme_io_md": false, 00:16:35.214 "write_zeroes": true, 00:16:35.214 "zcopy": true, 00:16:35.214 "get_zone_info": false, 00:16:35.214 "zone_management": false, 00:16:35.214 "zone_append": false, 00:16:35.214 "compare": false, 00:16:35.214 "compare_and_write": false, 00:16:35.214 "abort": true, 00:16:35.214 "seek_hole": false, 00:16:35.214 "seek_data": false, 00:16:35.214 "copy": true, 00:16:35.214 "nvme_iov_md": false 00:16:35.214 }, 00:16:35.214 "memory_domains": [ 00:16:35.214 { 00:16:35.214 "dma_device_id": "system", 00:16:35.214 "dma_device_type": 1 00:16:35.214 }, 00:16:35.214 { 00:16:35.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.214 "dma_device_type": 2 00:16:35.214 } 00:16:35.214 ], 00:16:35.214 "driver_specific": {} 00:16:35.214 } 00:16:35.214 ] 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.214 03:25:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.214 "name": "Existed_Raid", 00:16:35.214 "uuid": "ae00684f-a5f4-4703-81b2-a6c7508d2718", 00:16:35.214 "strip_size_kb": 0, 00:16:35.214 "state": "online", 00:16:35.214 "raid_level": "raid1", 00:16:35.214 "superblock": true, 00:16:35.214 "num_base_bdevs": 2, 00:16:35.214 "num_base_bdevs_discovered": 2, 00:16:35.214 "num_base_bdevs_operational": 2, 00:16:35.214 "base_bdevs_list": [ 00:16:35.214 { 00:16:35.214 "name": "BaseBdev1", 00:16:35.214 "uuid": 
"317df5ff-b891-40b8-8ead-0eb2ec35682b", 00:16:35.214 "is_configured": true, 00:16:35.214 "data_offset": 256, 00:16:35.214 "data_size": 7936 00:16:35.214 }, 00:16:35.214 { 00:16:35.214 "name": "BaseBdev2", 00:16:35.214 "uuid": "afb85471-6be7-462d-9435-4b189289a5e1", 00:16:35.214 "is_configured": true, 00:16:35.214 "data_offset": 256, 00:16:35.214 "data_size": 7936 00:16:35.214 } 00:16:35.214 ] 00:16:35.214 }' 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.214 03:25:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.785 [2024-11-21 03:25:23.128158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.785 "name": "Existed_Raid", 00:16:35.785 "aliases": [ 00:16:35.785 "ae00684f-a5f4-4703-81b2-a6c7508d2718" 00:16:35.785 ], 00:16:35.785 "product_name": "Raid Volume", 00:16:35.785 "block_size": 4096, 00:16:35.785 "num_blocks": 7936, 00:16:35.785 "uuid": "ae00684f-a5f4-4703-81b2-a6c7508d2718", 00:16:35.785 "assigned_rate_limits": { 00:16:35.785 "rw_ios_per_sec": 0, 00:16:35.785 "rw_mbytes_per_sec": 0, 00:16:35.785 "r_mbytes_per_sec": 0, 00:16:35.785 "w_mbytes_per_sec": 0 00:16:35.785 }, 00:16:35.785 "claimed": false, 00:16:35.785 "zoned": false, 00:16:35.785 "supported_io_types": { 00:16:35.785 "read": true, 00:16:35.785 "write": true, 00:16:35.785 "unmap": false, 00:16:35.785 "flush": false, 00:16:35.785 "reset": true, 00:16:35.785 "nvme_admin": false, 00:16:35.785 "nvme_io": false, 00:16:35.785 "nvme_io_md": false, 00:16:35.785 "write_zeroes": true, 00:16:35.785 "zcopy": false, 00:16:35.785 "get_zone_info": false, 00:16:35.785 "zone_management": false, 00:16:35.785 "zone_append": false, 00:16:35.785 "compare": false, 00:16:35.785 "compare_and_write": false, 00:16:35.785 "abort": false, 00:16:35.785 "seek_hole": false, 00:16:35.785 "seek_data": false, 00:16:35.785 "copy": false, 00:16:35.785 "nvme_iov_md": false 00:16:35.785 }, 00:16:35.785 "memory_domains": [ 00:16:35.785 { 00:16:35.785 "dma_device_id": "system", 00:16:35.785 "dma_device_type": 1 00:16:35.785 }, 00:16:35.785 { 00:16:35.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.785 "dma_device_type": 2 00:16:35.785 }, 00:16:35.785 { 00:16:35.785 "dma_device_id": "system", 00:16:35.785 "dma_device_type": 1 00:16:35.785 }, 00:16:35.785 { 00:16:35.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.785 "dma_device_type": 2 00:16:35.785 } 00:16:35.785 ], 00:16:35.785 "driver_specific": { 00:16:35.785 "raid": { 00:16:35.785 "uuid": 
"ae00684f-a5f4-4703-81b2-a6c7508d2718", 00:16:35.785 "strip_size_kb": 0, 00:16:35.785 "state": "online", 00:16:35.785 "raid_level": "raid1", 00:16:35.785 "superblock": true, 00:16:35.785 "num_base_bdevs": 2, 00:16:35.785 "num_base_bdevs_discovered": 2, 00:16:35.785 "num_base_bdevs_operational": 2, 00:16:35.785 "base_bdevs_list": [ 00:16:35.785 { 00:16:35.785 "name": "BaseBdev1", 00:16:35.785 "uuid": "317df5ff-b891-40b8-8ead-0eb2ec35682b", 00:16:35.785 "is_configured": true, 00:16:35.785 "data_offset": 256, 00:16:35.785 "data_size": 7936 00:16:35.785 }, 00:16:35.785 { 00:16:35.785 "name": "BaseBdev2", 00:16:35.785 "uuid": "afb85471-6be7-462d-9435-4b189289a5e1", 00:16:35.785 "is_configured": true, 00:16:35.785 "data_offset": 256, 00:16:35.785 "data_size": 7936 00:16:35.785 } 00:16:35.785 ] 00:16:35.785 } 00:16:35.785 } 00:16:35.785 }' 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.785 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:35.785 BaseBdev2' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.786 
03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.786 [2024-11-21 03:25:23.288003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.786 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.046 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.046 "name": "Existed_Raid", 00:16:36.046 "uuid": "ae00684f-a5f4-4703-81b2-a6c7508d2718", 00:16:36.046 "strip_size_kb": 0, 00:16:36.046 "state": "online", 00:16:36.046 "raid_level": "raid1", 00:16:36.046 "superblock": true, 00:16:36.046 "num_base_bdevs": 2, 00:16:36.046 "num_base_bdevs_discovered": 1, 00:16:36.046 "num_base_bdevs_operational": 1, 00:16:36.046 "base_bdevs_list": [ 00:16:36.046 { 00:16:36.046 "name": null, 00:16:36.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.046 "is_configured": false, 00:16:36.046 "data_offset": 0, 00:16:36.046 "data_size": 7936 00:16:36.046 }, 00:16:36.046 { 00:16:36.046 "name": "BaseBdev2", 00:16:36.046 "uuid": "afb85471-6be7-462d-9435-4b189289a5e1", 00:16:36.046 "is_configured": true, 00:16:36.046 "data_offset": 256, 00:16:36.046 "data_size": 7936 00:16:36.046 } 00:16:36.046 ] 00:16:36.046 }' 00:16:36.046 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.046 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.306 [2024-11-21 03:25:23.824185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.306 [2024-11-21 03:25:23.824358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.306 [2024-11-21 03:25:23.845744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.306 [2024-11-21 03:25:23.845874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.306 [2024-11-21 03:25:23.845920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 
00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.306 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 98350 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98350 ']' 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98350 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98350 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
98350' 00:16:36.566 killing process with pid 98350 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98350 00:16:36.566 [2024-11-21 03:25:23.941865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.566 03:25:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98350 00:16:36.566 [2024-11-21 03:25:23.943497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.827 03:25:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:36.827 00:16:36.827 real 0m3.941s 00:16:36.827 user 0m5.969s 00:16:36.827 sys 0m0.939s 00:16:36.827 ************************************ 00:16:36.827 END TEST raid_state_function_test_sb_4k 00:16:36.827 ************************************ 00:16:36.827 03:25:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.827 03:25:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.827 03:25:24 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:36.827 03:25:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:36.827 03:25:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.827 03:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.827 ************************************ 00:16:36.827 START TEST raid_superblock_test_4k 00:16:36.827 ************************************ 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98592 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 98592 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 98592 ']' 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.827 03:25:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.087 [2024-11-21 03:25:24.452380] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:37.087 [2024-11-21 03:25:24.452499] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98592 ] 00:16:37.087 [2024-11-21 03:25:24.587833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:37.087 [2024-11-21 03:25:24.627778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.348 [2024-11-21 03:25:24.668918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.348 [2024-11-21 03:25:24.747299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.348 [2024-11-21 03:25:24.747352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.918 malloc1 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.918 [2024-11-21 03:25:25.303940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.918 [2024-11-21 03:25:25.304108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.918 [2024-11-21 03:25:25.304167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:37.918 [2024-11-21 03:25:25.304217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.918 [2024-11-21 03:25:25.306649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.918 [2024-11-21 03:25:25.306733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.918 pt1 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:37.918 03:25:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.918 malloc2 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.918 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.919 [2024-11-21 03:25:25.342963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.919 [2024-11-21 03:25:25.343032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.919 [2024-11-21 03:25:25.343055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:37.919 [2024-11-21 03:25:25.343065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.919 [2024-11-21 03:25:25.345338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.919 [2024-11-21 03:25:25.345429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.919 pt2 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.919 [2024-11-21 03:25:25.354998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.919 [2024-11-21 03:25:25.357136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.919 [2024-11-21 03:25:25.357300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:37.919 [2024-11-21 03:25:25.357315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:37.919 [2024-11-21 03:25:25.357597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:37.919 [2024-11-21 03:25:25.357760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:37.919 [2024-11-21 03:25:25.357774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:37.919 [2024-11-21 03:25:25.357902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.919 03:25:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.919 "name": "raid_bdev1", 00:16:37.919 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:37.919 "strip_size_kb": 0, 00:16:37.919 "state": "online", 00:16:37.919 "raid_level": "raid1", 00:16:37.919 "superblock": true, 00:16:37.919 "num_base_bdevs": 2, 00:16:37.919 "num_base_bdevs_discovered": 2, 00:16:37.919 "num_base_bdevs_operational": 2, 00:16:37.919 "base_bdevs_list": [ 00:16:37.919 { 00:16:37.919 "name": "pt1", 00:16:37.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.919 "is_configured": true, 00:16:37.919 "data_offset": 256, 00:16:37.919 "data_size": 
7936 00:16:37.919 }, 00:16:37.919 { 00:16:37.919 "name": "pt2", 00:16:37.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.919 "is_configured": true, 00:16:37.919 "data_offset": 256, 00:16:37.919 "data_size": 7936 00:16:37.919 } 00:16:37.919 ] 00:16:37.919 }' 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.919 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.515 [2024-11-21 03:25:25.799380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.515 "name": "raid_bdev1", 00:16:38.515 "aliases": [ 00:16:38.515 
"189a1252-0b41-4869-a737-f4d7134456c8" 00:16:38.515 ], 00:16:38.515 "product_name": "Raid Volume", 00:16:38.515 "block_size": 4096, 00:16:38.515 "num_blocks": 7936, 00:16:38.515 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:38.515 "assigned_rate_limits": { 00:16:38.515 "rw_ios_per_sec": 0, 00:16:38.515 "rw_mbytes_per_sec": 0, 00:16:38.515 "r_mbytes_per_sec": 0, 00:16:38.515 "w_mbytes_per_sec": 0 00:16:38.515 }, 00:16:38.515 "claimed": false, 00:16:38.515 "zoned": false, 00:16:38.515 "supported_io_types": { 00:16:38.515 "read": true, 00:16:38.515 "write": true, 00:16:38.515 "unmap": false, 00:16:38.515 "flush": false, 00:16:38.515 "reset": true, 00:16:38.515 "nvme_admin": false, 00:16:38.515 "nvme_io": false, 00:16:38.515 "nvme_io_md": false, 00:16:38.515 "write_zeroes": true, 00:16:38.515 "zcopy": false, 00:16:38.515 "get_zone_info": false, 00:16:38.515 "zone_management": false, 00:16:38.515 "zone_append": false, 00:16:38.515 "compare": false, 00:16:38.515 "compare_and_write": false, 00:16:38.515 "abort": false, 00:16:38.515 "seek_hole": false, 00:16:38.515 "seek_data": false, 00:16:38.515 "copy": false, 00:16:38.515 "nvme_iov_md": false 00:16:38.515 }, 00:16:38.515 "memory_domains": [ 00:16:38.515 { 00:16:38.515 "dma_device_id": "system", 00:16:38.515 "dma_device_type": 1 00:16:38.515 }, 00:16:38.515 { 00:16:38.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.515 "dma_device_type": 2 00:16:38.515 }, 00:16:38.515 { 00:16:38.515 "dma_device_id": "system", 00:16:38.515 "dma_device_type": 1 00:16:38.515 }, 00:16:38.515 { 00:16:38.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.515 "dma_device_type": 2 00:16:38.515 } 00:16:38.515 ], 00:16:38.515 "driver_specific": { 00:16:38.515 "raid": { 00:16:38.515 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:38.515 "strip_size_kb": 0, 00:16:38.515 "state": "online", 00:16:38.515 "raid_level": "raid1", 00:16:38.515 "superblock": true, 00:16:38.515 "num_base_bdevs": 2, 00:16:38.515 
"num_base_bdevs_discovered": 2, 00:16:38.515 "num_base_bdevs_operational": 2, 00:16:38.515 "base_bdevs_list": [ 00:16:38.515 { 00:16:38.515 "name": "pt1", 00:16:38.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.515 "is_configured": true, 00:16:38.515 "data_offset": 256, 00:16:38.515 "data_size": 7936 00:16:38.515 }, 00:16:38.515 { 00:16:38.515 "name": "pt2", 00:16:38.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.515 "is_configured": true, 00:16:38.515 "data_offset": 256, 00:16:38.515 "data_size": 7936 00:16:38.515 } 00:16:38.515 ] 00:16:38.515 } 00:16:38.515 } 00:16:38.515 }' 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:38.515 pt2' 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.515 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.516 03:25:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.516 [2024-11-21 03:25:26.023348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=189a1252-0b41-4869-a737-f4d7134456c8 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 189a1252-0b41-4869-a737-f4d7134456c8 ']' 00:16:38.516 
03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.516 [2024-11-21 03:25:26.051137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.516 [2024-11-21 03:25:26.051166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.516 [2024-11-21 03:25:26.051248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.516 [2024-11-21 03:25:26.051313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.516 [2024-11-21 03:25:26.051328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.516 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.791 03:25:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.791 [2024-11-21 03:25:26.195203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:38.791 [2024-11-21 03:25:26.197361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:38.791 [2024-11-21 03:25:26.197466] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:38.791 [2024-11-21 03:25:26.197566] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:38.791 [2024-11-21 03:25:26.197640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.791 [2024-11-21 03:25:26.197654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:38.791 request: 00:16:38.791 { 00:16:38.791 "name": "raid_bdev1", 00:16:38.791 "raid_level": "raid1", 00:16:38.791 "base_bdevs": [ 00:16:38.791 "malloc1", 
00:16:38.791 "malloc2" 00:16:38.791 ], 00:16:38.791 "superblock": false, 00:16:38.791 "method": "bdev_raid_create", 00:16:38.791 "req_id": 1 00:16:38.791 } 00:16:38.791 Got JSON-RPC error response 00:16:38.791 response: 00:16:38.791 { 00:16:38.791 "code": -17, 00:16:38.791 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:38.791 } 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:38.791 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:38.792 [2024-11-21 03:25:26.251196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.792 [2024-11-21 03:25:26.251292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.792 [2024-11-21 03:25:26.251328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:38.792 [2024-11-21 03:25:26.251373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.792 [2024-11-21 03:25:26.253704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.792 [2024-11-21 03:25:26.253783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.792 [2024-11-21 03:25:26.253867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.792 [2024-11-21 03:25:26.253952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.792 pt1 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.792 "name": "raid_bdev1", 00:16:38.792 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:38.792 "strip_size_kb": 0, 00:16:38.792 "state": "configuring", 00:16:38.792 "raid_level": "raid1", 00:16:38.792 "superblock": true, 00:16:38.792 "num_base_bdevs": 2, 00:16:38.792 "num_base_bdevs_discovered": 1, 00:16:38.792 "num_base_bdevs_operational": 2, 00:16:38.792 "base_bdevs_list": [ 00:16:38.792 { 00:16:38.792 "name": "pt1", 00:16:38.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.792 "is_configured": true, 00:16:38.792 "data_offset": 256, 00:16:38.792 "data_size": 7936 00:16:38.792 }, 00:16:38.792 { 00:16:38.792 "name": null, 00:16:38.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.792 "is_configured": false, 00:16:38.792 "data_offset": 256, 00:16:38.792 "data_size": 7936 00:16:38.792 } 00:16:38.792 ] 00:16:38.792 }' 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.792 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.363 03:25:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.363 [2024-11-21 03:25:26.627275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:39.363 [2024-11-21 03:25:26.627379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.363 [2024-11-21 03:25:26.627405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:39.363 [2024-11-21 03:25:26.627417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.363 [2024-11-21 03:25:26.627757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.363 [2024-11-21 03:25:26.627788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:39.363 [2024-11-21 03:25:26.627842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:39.363 [2024-11-21 03:25:26.627864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:39.363 [2024-11-21 03:25:26.627941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:39.363 [2024-11-21 03:25:26.627955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:39.363 [2024-11-21 03:25:26.628218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:39.363 [2024-11-21 03:25:26.628359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:39.363 [2024-11-21 03:25:26.628370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:39.363 [2024-11-21 03:25:26.628475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.363 pt2 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.363 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.363 "name": "raid_bdev1", 00:16:39.363 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:39.363 "strip_size_kb": 0, 00:16:39.363 "state": "online", 00:16:39.363 "raid_level": "raid1", 00:16:39.363 "superblock": true, 00:16:39.363 "num_base_bdevs": 2, 00:16:39.363 "num_base_bdevs_discovered": 2, 00:16:39.363 "num_base_bdevs_operational": 2, 00:16:39.363 "base_bdevs_list": [ 00:16:39.363 { 00:16:39.363 "name": "pt1", 00:16:39.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.363 "is_configured": true, 00:16:39.363 "data_offset": 256, 00:16:39.363 "data_size": 7936 00:16:39.363 }, 00:16:39.363 { 00:16:39.363 "name": "pt2", 00:16:39.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.363 "is_configured": true, 00:16:39.363 "data_offset": 256, 00:16:39.363 "data_size": 7936 00:16:39.363 } 00:16:39.363 ] 00:16:39.364 }' 00:16:39.364 03:25:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.364 03:25:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.624 
03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.624 [2024-11-21 03:25:27.047609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.624 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.624 "name": "raid_bdev1", 00:16:39.624 "aliases": [ 00:16:39.624 "189a1252-0b41-4869-a737-f4d7134456c8" 00:16:39.624 ], 00:16:39.624 "product_name": "Raid Volume", 00:16:39.624 "block_size": 4096, 00:16:39.624 "num_blocks": 7936, 00:16:39.624 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:39.624 "assigned_rate_limits": { 00:16:39.624 "rw_ios_per_sec": 0, 00:16:39.624 "rw_mbytes_per_sec": 0, 00:16:39.624 "r_mbytes_per_sec": 0, 00:16:39.624 "w_mbytes_per_sec": 0 00:16:39.624 }, 00:16:39.624 "claimed": false, 00:16:39.624 "zoned": false, 00:16:39.624 "supported_io_types": { 00:16:39.624 "read": true, 00:16:39.624 "write": true, 00:16:39.624 "unmap": false, 00:16:39.624 "flush": false, 00:16:39.624 "reset": true, 00:16:39.624 "nvme_admin": false, 00:16:39.624 "nvme_io": false, 00:16:39.624 "nvme_io_md": false, 00:16:39.624 "write_zeroes": true, 00:16:39.624 "zcopy": false, 00:16:39.624 "get_zone_info": 
false, 00:16:39.624 "zone_management": false, 00:16:39.624 "zone_append": false, 00:16:39.624 "compare": false, 00:16:39.624 "compare_and_write": false, 00:16:39.624 "abort": false, 00:16:39.624 "seek_hole": false, 00:16:39.624 "seek_data": false, 00:16:39.624 "copy": false, 00:16:39.624 "nvme_iov_md": false 00:16:39.624 }, 00:16:39.624 "memory_domains": [ 00:16:39.624 { 00:16:39.624 "dma_device_id": "system", 00:16:39.624 "dma_device_type": 1 00:16:39.624 }, 00:16:39.624 { 00:16:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.624 "dma_device_type": 2 00:16:39.624 }, 00:16:39.624 { 00:16:39.624 "dma_device_id": "system", 00:16:39.624 "dma_device_type": 1 00:16:39.624 }, 00:16:39.624 { 00:16:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.624 "dma_device_type": 2 00:16:39.624 } 00:16:39.624 ], 00:16:39.624 "driver_specific": { 00:16:39.624 "raid": { 00:16:39.624 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:39.624 "strip_size_kb": 0, 00:16:39.624 "state": "online", 00:16:39.624 "raid_level": "raid1", 00:16:39.624 "superblock": true, 00:16:39.624 "num_base_bdevs": 2, 00:16:39.624 "num_base_bdevs_discovered": 2, 00:16:39.624 "num_base_bdevs_operational": 2, 00:16:39.624 "base_bdevs_list": [ 00:16:39.624 { 00:16:39.624 "name": "pt1", 00:16:39.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.624 "is_configured": true, 00:16:39.624 "data_offset": 256, 00:16:39.624 "data_size": 7936 00:16:39.624 }, 00:16:39.624 { 00:16:39.624 "name": "pt2", 00:16:39.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.624 "is_configured": true, 00:16:39.624 "data_offset": 256, 00:16:39.624 "data_size": 7936 00:16:39.624 } 00:16:39.624 ] 00:16:39.624 } 00:16:39.624 } 00:16:39.624 }' 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:16:39.625 pt2' 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.625 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 03:25:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:39.884 [2024-11-21 03:25:27.279683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 189a1252-0b41-4869-a737-f4d7134456c8 '!=' 189a1252-0b41-4869-a737-f4d7134456c8 ']' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 [2024-11-21 03:25:27.327462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.884 "name": "raid_bdev1", 00:16:39.884 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:39.884 "strip_size_kb": 0, 00:16:39.884 "state": "online", 00:16:39.884 "raid_level": "raid1", 00:16:39.884 "superblock": true, 00:16:39.884 "num_base_bdevs": 2, 00:16:39.884 "num_base_bdevs_discovered": 1, 
00:16:39.884 "num_base_bdevs_operational": 1, 00:16:39.884 "base_bdevs_list": [ 00:16:39.884 { 00:16:39.884 "name": null, 00:16:39.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.884 "is_configured": false, 00:16:39.884 "data_offset": 0, 00:16:39.884 "data_size": 7936 00:16:39.884 }, 00:16:39.884 { 00:16:39.884 "name": "pt2", 00:16:39.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.884 "is_configured": true, 00:16:39.884 "data_offset": 256, 00:16:39.884 "data_size": 7936 00:16:39.884 } 00:16:39.884 ] 00:16:39.884 }' 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.884 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 [2024-11-21 03:25:27.719556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.453 [2024-11-21 03:25:27.719629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.453 [2024-11-21 03:25:27.719692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.453 [2024-11-21 03:25:27.719729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.453 [2024-11-21 03:25:27.719741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.453 
03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.453 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.453 [2024-11-21 03:25:27.795595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.453 [2024-11-21 03:25:27.795647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.453 [2024-11-21 03:25:27.795663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:40.453 [2024-11-21 03:25:27.795676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.453 [2024-11-21 03:25:27.798058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.453 [2024-11-21 03:25:27.798143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.453 [2024-11-21 03:25:27.798208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.453 [2024-11-21 03:25:27.798243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.453 [2024-11-21 03:25:27.798317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:40.453 [2024-11-21 03:25:27.798331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:40.453 [2024-11-21 03:25:27.798529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:40.454 [2024-11-21 03:25:27.798656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:40.454 [2024-11-21 03:25:27.798666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:40.454 [2024-11-21 03:25:27.798770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.454 pt2 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.454 "name": "raid_bdev1", 00:16:40.454 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:40.454 "strip_size_kb": 0, 00:16:40.454 "state": 
"online", 00:16:40.454 "raid_level": "raid1", 00:16:40.454 "superblock": true, 00:16:40.454 "num_base_bdevs": 2, 00:16:40.454 "num_base_bdevs_discovered": 1, 00:16:40.454 "num_base_bdevs_operational": 1, 00:16:40.454 "base_bdevs_list": [ 00:16:40.454 { 00:16:40.454 "name": null, 00:16:40.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.454 "is_configured": false, 00:16:40.454 "data_offset": 256, 00:16:40.454 "data_size": 7936 00:16:40.454 }, 00:16:40.454 { 00:16:40.454 "name": "pt2", 00:16:40.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.454 "is_configured": true, 00:16:40.454 "data_offset": 256, 00:16:40.454 "data_size": 7936 00:16:40.454 } 00:16:40.454 ] 00:16:40.454 }' 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.454 03:25:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.714 [2024-11-21 03:25:28.223692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.714 [2024-11-21 03:25:28.223766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.714 [2024-11-21 03:25:28.223838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.714 [2024-11-21 03:25:28.223895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.714 [2024-11-21 03:25:28.223951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.714 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.973 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.974 [2024-11-21 03:25:28.287701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:40.974 [2024-11-21 03:25:28.287793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.974 [2024-11-21 03:25:28.287834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:40.974 [2024-11-21 03:25:28.287872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.974 [2024-11-21 03:25:28.290255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.974 [2024-11-21 03:25:28.290329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:40.974 
[2024-11-21 03:25:28.290416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:40.974 [2024-11-21 03:25:28.290462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:40.974 [2024-11-21 03:25:28.290572] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:40.974 [2024-11-21 03:25:28.290631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.974 [2024-11-21 03:25:28.290704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:40.974 [2024-11-21 03:25:28.290779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.974 [2024-11-21 03:25:28.290917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:40.974 [2024-11-21 03:25:28.290968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:40.974 [2024-11-21 03:25:28.291203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:40.974 [2024-11-21 03:25:28.291325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:40.974 [2024-11-21 03:25:28.291341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:40.974 [2024-11-21 03:25:28.291447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.974 pt1 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.974 "name": "raid_bdev1", 00:16:40.974 "uuid": "189a1252-0b41-4869-a737-f4d7134456c8", 00:16:40.974 "strip_size_kb": 0, 00:16:40.974 "state": "online", 00:16:40.974 "raid_level": "raid1", 00:16:40.974 "superblock": true, 00:16:40.974 "num_base_bdevs": 2, 00:16:40.974 "num_base_bdevs_discovered": 1, 00:16:40.974 "num_base_bdevs_operational": 1, 00:16:40.974 "base_bdevs_list": [ 
00:16:40.974 { 00:16:40.974 "name": null, 00:16:40.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.974 "is_configured": false, 00:16:40.974 "data_offset": 256, 00:16:40.974 "data_size": 7936 00:16:40.974 }, 00:16:40.974 { 00:16:40.974 "name": "pt2", 00:16:40.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.974 "is_configured": true, 00:16:40.974 "data_offset": 256, 00:16:40.974 "data_size": 7936 00:16:40.974 } 00:16:40.974 ] 00:16:40.974 }' 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.974 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.234 [2024-11-21 03:25:28.768043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.234 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 189a1252-0b41-4869-a737-f4d7134456c8 '!=' 189a1252-0b41-4869-a737-f4d7134456c8 ']' 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98592 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 98592 ']' 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 98592 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98592 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.493 killing process with pid 98592 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98592' 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 98592 00:16:41.493 [2024-11-21 03:25:28.845338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.493 [2024-11-21 03:25:28.845417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.493 [2024-11-21 03:25:28.845455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.493 [2024-11-21 03:25:28.845467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:41.493 03:25:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # 
wait 98592 00:16:41.493 [2024-11-21 03:25:28.886666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.755 03:25:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:41.755 00:16:41.755 real 0m4.861s 00:16:41.755 user 0m7.668s 00:16:41.755 sys 0m1.148s 00:16:41.755 03:25:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.755 ************************************ 00:16:41.755 END TEST raid_superblock_test_4k 00:16:41.755 ************************************ 00:16:41.755 03:25:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.755 03:25:29 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:41.755 03:25:29 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:41.755 03:25:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:41.755 03:25:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.755 03:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.755 ************************************ 00:16:41.755 START TEST raid_rebuild_test_sb_4k 00:16:41.755 ************************************ 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:41.755 03:25:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:41.755 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:42.016 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:42.016 
03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98904 00:16:42.016 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:42.016 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98904 00:16:42.016 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98904 ']' 00:16:42.017 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.017 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.017 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.017 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.017 03:25:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.017 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:42.017 Zero copy mechanism will not be used. 00:16:42.017 [2024-11-21 03:25:29.405231] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:16:42.017 [2024-11-21 03:25:29.405372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98904 ] 00:16:42.017 [2024-11-21 03:25:29.540722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:42.017 [2024-11-21 03:25:29.576275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.276 [2024-11-21 03:25:29.615696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.276 [2024-11-21 03:25:29.693430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.276 [2024-11-21 03:25:29.693559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.846 BaseBdev1_malloc 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.846 [2024-11-21 03:25:30.258986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:42.846 [2024-11-21 03:25:30.259073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.846 [2024-11-21 03:25:30.259106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:16:42.846 [2024-11-21 03:25:30.259122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.846 [2024-11-21 03:25:30.261633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.846 [2024-11-21 03:25:30.261679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.846 BaseBdev1 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.846 BaseBdev2_malloc 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.846 [2024-11-21 03:25:30.294059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:42.846 [2024-11-21 03:25:30.294166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.846 [2024-11-21 03:25:30.294190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:42.846 [2024-11-21 03:25:30.294203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.846 [2024-11-21 
03:25:30.296578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.846 [2024-11-21 03:25:30.296622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.846 BaseBdev2 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.846 spare_malloc 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.846 spare_delay 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.846 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.847 [2024-11-21 03:25:30.341035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.847 [2024-11-21 03:25:30.341092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.847 [2024-11-21 03:25:30.341114] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:42.847 [2024-11-21 03:25:30.341129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.847 [2024-11-21 03:25:30.343436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.847 [2024-11-21 03:25:30.343477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.847 spare 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.847 [2024-11-21 03:25:30.353135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.847 [2024-11-21 03:25:30.355177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.847 [2024-11-21 03:25:30.355350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:42.847 [2024-11-21 03:25:30.355366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:42.847 [2024-11-21 03:25:30.355633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:42.847 [2024-11-21 03:25:30.355788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:42.847 [2024-11-21 03:25:30.355798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:42.847 [2024-11-21 03:25:30.355908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.847 03:25:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.847 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.107 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.107 "name": "raid_bdev1", 00:16:43.107 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:43.107 
"strip_size_kb": 0, 00:16:43.107 "state": "online", 00:16:43.107 "raid_level": "raid1", 00:16:43.107 "superblock": true, 00:16:43.107 "num_base_bdevs": 2, 00:16:43.107 "num_base_bdevs_discovered": 2, 00:16:43.107 "num_base_bdevs_operational": 2, 00:16:43.107 "base_bdevs_list": [ 00:16:43.107 { 00:16:43.107 "name": "BaseBdev1", 00:16:43.107 "uuid": "3067037b-aa83-5dd9-91ef-4d6c91fc795f", 00:16:43.107 "is_configured": true, 00:16:43.107 "data_offset": 256, 00:16:43.107 "data_size": 7936 00:16:43.107 }, 00:16:43.107 { 00:16:43.107 "name": "BaseBdev2", 00:16:43.107 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:43.107 "is_configured": true, 00:16:43.107 "data_offset": 256, 00:16:43.107 "data_size": 7936 00:16:43.107 } 00:16:43.107 ] 00:16:43.107 }' 00:16:43.107 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.107 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.368 [2024-11-21 03:25:30.817469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.368 03:25:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.368 03:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:43.628 [2024-11-21 03:25:31.089391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:43.628 /dev/nbd0 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.628 1+0 records in 00:16:43.628 1+0 records out 00:16:43.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860935 s, 4.8 MB/s 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:43.628 03:25:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:43.628 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:44.199 7936+0 records in 00:16:44.199 7936+0 records out 00:16:44.199 32505856 bytes (33 MB, 31 MiB) copied, 0.518715 s, 62.7 MB/s 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.199 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:44.460 [2024-11-21 03:25:31.900152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.460 [2024-11-21 03:25:31.931063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.460 "name": "raid_bdev1", 00:16:44.460 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:44.460 "strip_size_kb": 0, 00:16:44.460 "state": "online", 00:16:44.460 "raid_level": "raid1", 00:16:44.460 "superblock": true, 00:16:44.460 "num_base_bdevs": 2, 00:16:44.460 "num_base_bdevs_discovered": 1, 00:16:44.460 "num_base_bdevs_operational": 1, 00:16:44.460 "base_bdevs_list": [ 00:16:44.460 { 00:16:44.460 "name": null, 00:16:44.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.460 "is_configured": false, 00:16:44.460 "data_offset": 0, 00:16:44.460 "data_size": 7936 00:16:44.460 }, 00:16:44.460 { 00:16:44.460 "name": "BaseBdev2", 00:16:44.460 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:44.460 "is_configured": true, 00:16:44.460 "data_offset": 256, 00:16:44.460 "data_size": 7936 00:16:44.460 } 00:16:44.460 ] 00:16:44.460 }' 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.460 03:25:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.031 03:25:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.031 03:25:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.031 03:25:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.031 [2024-11-21 03:25:32.347153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.031 [2024-11-21 03:25:32.364092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:16:45.031 03:25:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.031 03:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:45.031 [2024-11-21 03:25:32.370918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.970 03:25:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.970 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.970 "name": "raid_bdev1", 00:16:45.971 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:45.971 "strip_size_kb": 0, 00:16:45.971 "state": "online", 00:16:45.971 "raid_level": "raid1", 00:16:45.971 "superblock": true, 00:16:45.971 "num_base_bdevs": 2, 00:16:45.971 "num_base_bdevs_discovered": 2, 00:16:45.971 "num_base_bdevs_operational": 2, 00:16:45.971 "process": { 00:16:45.971 "type": "rebuild", 00:16:45.971 "target": "spare", 00:16:45.971 "progress": { 00:16:45.971 "blocks": 2560, 00:16:45.971 "percent": 32 00:16:45.971 } 00:16:45.971 }, 00:16:45.971 "base_bdevs_list": [ 00:16:45.971 { 00:16:45.971 "name": "spare", 00:16:45.971 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:45.971 "is_configured": true, 00:16:45.971 "data_offset": 256, 00:16:45.971 "data_size": 7936 00:16:45.971 }, 00:16:45.971 { 00:16:45.971 "name": "BaseBdev2", 00:16:45.971 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:45.971 "is_configured": true, 00:16:45.971 "data_offset": 256, 00:16:45.971 "data_size": 7936 00:16:45.971 } 00:16:45.971 ] 00:16:45.971 }' 00:16:45.971 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.971 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.971 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.971 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.971 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:45.971 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.971 03:25:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.971 [2024-11-21 03:25:33.530076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.231 [2024-11-21 03:25:33.578053] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.231 [2024-11-21 03:25:33.578125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.231 [2024-11-21 03:25:33.578140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.231 [2024-11-21 03:25:33.578149] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.231 03:25:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.231 "name": "raid_bdev1", 00:16:46.231 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:46.231 "strip_size_kb": 0, 00:16:46.231 "state": "online", 00:16:46.231 "raid_level": "raid1", 00:16:46.231 "superblock": true, 00:16:46.231 "num_base_bdevs": 2, 00:16:46.231 "num_base_bdevs_discovered": 1, 00:16:46.231 "num_base_bdevs_operational": 1, 00:16:46.231 "base_bdevs_list": [ 00:16:46.231 { 00:16:46.231 "name": null, 00:16:46.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.231 "is_configured": false, 00:16:46.231 "data_offset": 0, 00:16:46.231 "data_size": 7936 00:16:46.231 }, 00:16:46.231 { 00:16:46.231 "name": "BaseBdev2", 00:16:46.231 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:46.231 "is_configured": true, 00:16:46.231 "data_offset": 256, 00:16:46.231 "data_size": 7936 00:16:46.231 } 00:16:46.231 ] 00:16:46.231 }' 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.231 03:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.491 03:25:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.491 "name": "raid_bdev1", 00:16:46.491 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:46.491 "strip_size_kb": 0, 00:16:46.491 "state": "online", 00:16:46.491 "raid_level": "raid1", 00:16:46.491 "superblock": true, 00:16:46.491 "num_base_bdevs": 2, 00:16:46.491 "num_base_bdevs_discovered": 1, 00:16:46.491 "num_base_bdevs_operational": 1, 00:16:46.491 "base_bdevs_list": [ 00:16:46.491 { 00:16:46.491 "name": null, 00:16:46.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.491 "is_configured": false, 00:16:46.491 "data_offset": 0, 00:16:46.491 "data_size": 7936 00:16:46.491 }, 00:16:46.491 { 00:16:46.491 "name": "BaseBdev2", 00:16:46.491 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:46.491 "is_configured": true, 00:16:46.491 "data_offset": 256, 00:16:46.491 "data_size": 7936 00:16:46.491 } 00:16:46.491 ] 00:16:46.491 }' 00:16:46.491 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.751 03:25:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.751 [2024-11-21 03:25:34.123165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.751 [2024-11-21 03:25:34.128063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.751 03:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:46.751 [2024-11-21 03:25:34.129954] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.691 "name": "raid_bdev1", 00:16:47.691 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:47.691 "strip_size_kb": 0, 00:16:47.691 "state": "online", 00:16:47.691 "raid_level": "raid1", 00:16:47.691 "superblock": true, 00:16:47.691 "num_base_bdevs": 2, 00:16:47.691 "num_base_bdevs_discovered": 2, 00:16:47.691 "num_base_bdevs_operational": 2, 00:16:47.691 "process": { 00:16:47.691 "type": "rebuild", 00:16:47.691 "target": "spare", 00:16:47.691 "progress": { 00:16:47.691 "blocks": 2560, 00:16:47.691 "percent": 32 00:16:47.691 } 00:16:47.691 }, 00:16:47.691 "base_bdevs_list": [ 00:16:47.691 { 00:16:47.691 "name": "spare", 00:16:47.691 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:47.691 "is_configured": true, 00:16:47.691 "data_offset": 256, 00:16:47.691 "data_size": 7936 00:16:47.691 }, 00:16:47.691 { 00:16:47.691 "name": "BaseBdev2", 00:16:47.691 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:47.691 "is_configured": true, 00:16:47.691 "data_offset": 256, 00:16:47.691 "data_size": 7936 00:16:47.691 } 00:16:47.691 ] 00:16:47.691 }' 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.691 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.950 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.950 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:47.950 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:47.950 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=570 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.951 03:25:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.951 "name": "raid_bdev1", 00:16:47.951 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:47.951 "strip_size_kb": 0, 00:16:47.951 "state": "online", 00:16:47.951 "raid_level": "raid1", 00:16:47.951 "superblock": true, 00:16:47.951 "num_base_bdevs": 2, 00:16:47.951 "num_base_bdevs_discovered": 2, 00:16:47.951 "num_base_bdevs_operational": 2, 00:16:47.951 "process": { 00:16:47.951 "type": "rebuild", 00:16:47.951 "target": "spare", 00:16:47.951 "progress": { 00:16:47.951 "blocks": 2816, 00:16:47.951 "percent": 35 00:16:47.951 } 00:16:47.951 }, 00:16:47.951 "base_bdevs_list": [ 00:16:47.951 { 00:16:47.951 "name": "spare", 00:16:47.951 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:47.951 "is_configured": true, 00:16:47.951 "data_offset": 256, 00:16:47.951 "data_size": 7936 00:16:47.951 }, 00:16:47.951 { 00:16:47.951 "name": "BaseBdev2", 00:16:47.951 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:47.951 "is_configured": true, 00:16:47.951 "data_offset": 256, 00:16:47.951 "data_size": 7936 00:16:47.951 } 00:16:47.951 ] 00:16:47.951 }' 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.951 03:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.889 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.148 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.148 "name": "raid_bdev1", 00:16:49.148 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:49.148 "strip_size_kb": 0, 00:16:49.148 "state": "online", 00:16:49.148 "raid_level": "raid1", 00:16:49.148 "superblock": true, 00:16:49.148 "num_base_bdevs": 2, 00:16:49.148 "num_base_bdevs_discovered": 2, 00:16:49.148 "num_base_bdevs_operational": 2, 00:16:49.148 "process": { 00:16:49.148 "type": "rebuild", 00:16:49.148 "target": "spare", 00:16:49.148 "progress": { 00:16:49.148 "blocks": 5632, 00:16:49.148 "percent": 70 00:16:49.148 } 00:16:49.148 }, 00:16:49.148 "base_bdevs_list": [ 00:16:49.148 { 00:16:49.148 "name": "spare", 00:16:49.148 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:49.148 "is_configured": true, 00:16:49.148 "data_offset": 256, 00:16:49.148 "data_size": 7936 00:16:49.148 
}, 00:16:49.148 { 00:16:49.148 "name": "BaseBdev2", 00:16:49.148 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:49.148 "is_configured": true, 00:16:49.148 "data_offset": 256, 00:16:49.148 "data_size": 7936 00:16:49.148 } 00:16:49.148 ] 00:16:49.148 }' 00:16:49.148 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.148 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.148 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.148 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.148 03:25:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:49.717 [2024-11-21 03:25:37.245655] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:49.717 [2024-11-21 03:25:37.245730] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:49.717 [2024-11-21 03:25:37.245837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.285 "name": "raid_bdev1", 00:16:50.285 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:50.285 "strip_size_kb": 0, 00:16:50.285 "state": "online", 00:16:50.285 "raid_level": "raid1", 00:16:50.285 "superblock": true, 00:16:50.285 "num_base_bdevs": 2, 00:16:50.285 "num_base_bdevs_discovered": 2, 00:16:50.285 "num_base_bdevs_operational": 2, 00:16:50.285 "base_bdevs_list": [ 00:16:50.285 { 00:16:50.285 "name": "spare", 00:16:50.285 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:50.285 "is_configured": true, 00:16:50.285 "data_offset": 256, 00:16:50.285 "data_size": 7936 00:16:50.285 }, 00:16:50.285 { 00:16:50.285 "name": "BaseBdev2", 00:16:50.285 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:50.285 "is_configured": true, 00:16:50.285 "data_offset": 256, 00:16:50.285 "data_size": 7936 00:16:50.285 } 00:16:50.285 ] 00:16:50.285 }' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.285 "name": "raid_bdev1", 00:16:50.285 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:50.285 "strip_size_kb": 0, 00:16:50.285 "state": "online", 00:16:50.285 "raid_level": "raid1", 00:16:50.285 "superblock": true, 00:16:50.285 "num_base_bdevs": 2, 00:16:50.285 "num_base_bdevs_discovered": 2, 00:16:50.285 "num_base_bdevs_operational": 2, 00:16:50.285 "base_bdevs_list": [ 00:16:50.285 { 00:16:50.285 "name": "spare", 00:16:50.285 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:50.285 "is_configured": true, 00:16:50.285 "data_offset": 256, 00:16:50.285 "data_size": 7936 00:16:50.285 }, 00:16:50.285 { 00:16:50.285 "name": "BaseBdev2", 00:16:50.285 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:50.285 "is_configured": true, 
00:16:50.285 "data_offset": 256, 00:16:50.285 "data_size": 7936 00:16:50.285 } 00:16:50.285 ] 00:16:50.285 }' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.285 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.544 03:25:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.544 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.544 "name": "raid_bdev1", 00:16:50.544 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:50.544 "strip_size_kb": 0, 00:16:50.544 "state": "online", 00:16:50.544 "raid_level": "raid1", 00:16:50.544 "superblock": true, 00:16:50.544 "num_base_bdevs": 2, 00:16:50.544 "num_base_bdevs_discovered": 2, 00:16:50.544 "num_base_bdevs_operational": 2, 00:16:50.544 "base_bdevs_list": [ 00:16:50.544 { 00:16:50.544 "name": "spare", 00:16:50.544 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:50.544 "is_configured": true, 00:16:50.544 "data_offset": 256, 00:16:50.544 "data_size": 7936 00:16:50.544 }, 00:16:50.544 { 00:16:50.544 "name": "BaseBdev2", 00:16:50.544 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:50.544 "is_configured": true, 00:16:50.544 "data_offset": 256, 00:16:50.544 "data_size": 7936 00:16:50.544 } 00:16:50.544 ] 00:16:50.544 }' 00:16:50.545 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.545 03:25:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.804 [2024-11-21 03:25:38.302383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.804 [2024-11-21 03:25:38.302420] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:16:50.804 [2024-11-21 03:25:38.302505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.804 [2024-11-21 03:25:38.302574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.804 [2024-11-21 03:25:38.302592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:50.804 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:51.063 /dev/nbd0 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.063 1+0 records in 00:16:51.063 1+0 records out 00:16:51.063 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404323 s, 10.1 MB/s 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.063 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:51.323 /dev/nbd1 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:51.323 03:25:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.323 1+0 records in 00:16:51.323 1+0 records out 00:16:51.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443422 s, 9.2 MB/s 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.323 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:51.582 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:51.582 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.582 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:51.582 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.583 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:16:51.583 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.583 03:25:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.583 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.842 03:25:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.842 [2024-11-21 03:25:39.371640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:51.842 [2024-11-21 03:25:39.371696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.842 [2024-11-21 03:25:39.371719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:51.842 [2024-11-21 03:25:39.371728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.842 [2024-11-21 03:25:39.373853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.842 [2024-11-21 03:25:39.373892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:51.842 [2024-11-21 03:25:39.373969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:16:51.842 [2024-11-21 03:25:39.374026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.842 [2024-11-21 03:25:39.374152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.842 spare 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.842 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.101 [2024-11-21 03:25:39.474225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:52.101 [2024-11-21 03:25:39.474254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:52.101 [2024-11-21 03:25:39.474517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:52.102 [2024-11-21 03:25:39.474673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:52.102 [2024-11-21 03:25:39.474690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:52.102 [2024-11-21 03:25:39.474807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.102 
03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.102 "name": "raid_bdev1", 00:16:52.102 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:52.102 "strip_size_kb": 0, 00:16:52.102 "state": "online", 00:16:52.102 "raid_level": "raid1", 00:16:52.102 "superblock": true, 00:16:52.102 "num_base_bdevs": 2, 00:16:52.102 "num_base_bdevs_discovered": 2, 00:16:52.102 "num_base_bdevs_operational": 2, 00:16:52.102 "base_bdevs_list": [ 00:16:52.102 { 00:16:52.102 "name": "spare", 00:16:52.102 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:52.102 "is_configured": true, 00:16:52.102 "data_offset": 256, 00:16:52.102 
"data_size": 7936 00:16:52.102 }, 00:16:52.102 { 00:16:52.102 "name": "BaseBdev2", 00:16:52.102 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:52.102 "is_configured": true, 00:16:52.102 "data_offset": 256, 00:16:52.102 "data_size": 7936 00:16:52.102 } 00:16:52.102 ] 00:16:52.102 }' 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.102 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.362 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.622 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.622 "name": "raid_bdev1", 00:16:52.622 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:52.622 "strip_size_kb": 0, 00:16:52.622 "state": "online", 00:16:52.622 "raid_level": "raid1", 00:16:52.622 "superblock": true, 00:16:52.622 "num_base_bdevs": 2, 
00:16:52.622 "num_base_bdevs_discovered": 2, 00:16:52.622 "num_base_bdevs_operational": 2, 00:16:52.622 "base_bdevs_list": [ 00:16:52.622 { 00:16:52.622 "name": "spare", 00:16:52.622 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:52.622 "is_configured": true, 00:16:52.622 "data_offset": 256, 00:16:52.622 "data_size": 7936 00:16:52.622 }, 00:16:52.622 { 00:16:52.622 "name": "BaseBdev2", 00:16:52.622 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:52.622 "is_configured": true, 00:16:52.622 "data_offset": 256, 00:16:52.622 "data_size": 7936 00:16:52.622 } 00:16:52.622 ] 00:16:52.622 }' 00:16:52.622 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.622 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.622 03:25:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.622 03:25:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.622 [2024-11-21 03:25:40.091853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.622 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.622 
03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.622 "name": "raid_bdev1", 00:16:52.622 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:52.622 "strip_size_kb": 0, 00:16:52.622 "state": "online", 00:16:52.622 "raid_level": "raid1", 00:16:52.622 "superblock": true, 00:16:52.622 "num_base_bdevs": 2, 00:16:52.622 "num_base_bdevs_discovered": 1, 00:16:52.622 "num_base_bdevs_operational": 1, 00:16:52.622 "base_bdevs_list": [ 00:16:52.622 { 00:16:52.622 "name": null, 00:16:52.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.622 "is_configured": false, 00:16:52.622 "data_offset": 0, 00:16:52.622 "data_size": 7936 00:16:52.622 }, 00:16:52.622 { 00:16:52.622 "name": "BaseBdev2", 00:16:52.622 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:52.622 "is_configured": true, 00:16:52.622 "data_offset": 256, 00:16:52.622 "data_size": 7936 00:16:52.622 } 00:16:52.623 ] 00:16:52.623 }' 00:16:52.623 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.623 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.193 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.193 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.193 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.193 [2024-11-21 03:25:40.556006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.193 [2024-11-21 03:25:40.556167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.193 [2024-11-21 03:25:40.556194] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:53.193 [2024-11-21 03:25:40.556224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.193 [2024-11-21 03:25:40.561147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:53.193 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.193 03:25:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:53.193 [2024-11-21 03:25:40.562962] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.132 "name": "raid_bdev1", 00:16:54.132 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:54.132 "strip_size_kb": 0, 00:16:54.132 "state": "online", 
00:16:54.132 "raid_level": "raid1", 00:16:54.132 "superblock": true, 00:16:54.132 "num_base_bdevs": 2, 00:16:54.132 "num_base_bdevs_discovered": 2, 00:16:54.132 "num_base_bdevs_operational": 2, 00:16:54.132 "process": { 00:16:54.132 "type": "rebuild", 00:16:54.132 "target": "spare", 00:16:54.132 "progress": { 00:16:54.132 "blocks": 2560, 00:16:54.132 "percent": 32 00:16:54.132 } 00:16:54.132 }, 00:16:54.132 "base_bdevs_list": [ 00:16:54.132 { 00:16:54.132 "name": "spare", 00:16:54.132 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:54.132 "is_configured": true, 00:16:54.132 "data_offset": 256, 00:16:54.132 "data_size": 7936 00:16:54.132 }, 00:16:54.132 { 00:16:54.132 "name": "BaseBdev2", 00:16:54.132 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:54.132 "is_configured": true, 00:16:54.132 "data_offset": 256, 00:16:54.132 "data_size": 7936 00:16:54.132 } 00:16:54.132 ] 00:16:54.132 }' 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.132 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.392 [2024-11-21 03:25:41.713109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.392 [2024-11-21 03:25:41.769047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:54.392 [2024-11-21 
03:25:41.769099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.392 [2024-11-21 03:25:41.769113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.392 [2024-11-21 03:25:41.769121] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.392 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.392 "name": "raid_bdev1", 00:16:54.392 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:54.392 "strip_size_kb": 0, 00:16:54.392 "state": "online", 00:16:54.392 "raid_level": "raid1", 00:16:54.392 "superblock": true, 00:16:54.392 "num_base_bdevs": 2, 00:16:54.392 "num_base_bdevs_discovered": 1, 00:16:54.392 "num_base_bdevs_operational": 1, 00:16:54.392 "base_bdevs_list": [ 00:16:54.392 { 00:16:54.392 "name": null, 00:16:54.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.392 "is_configured": false, 00:16:54.392 "data_offset": 0, 00:16:54.392 "data_size": 7936 00:16:54.392 }, 00:16:54.392 { 00:16:54.392 "name": "BaseBdev2", 00:16:54.392 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:54.392 "is_configured": true, 00:16:54.392 "data_offset": 256, 00:16:54.392 "data_size": 7936 00:16:54.392 } 00:16:54.393 ] 00:16:54.393 }' 00:16:54.393 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.393 03:25:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.963 03:25:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.963 03:25:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.963 03:25:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.963 [2024-11-21 03:25:42.233355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.963 [2024-11-21 03:25:42.233411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.963 [2024-11-21 03:25:42.233431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:54.963 [2024-11-21 03:25:42.233442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.963 [2024-11-21 03:25:42.233844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.963 [2024-11-21 03:25:42.233873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.963 [2024-11-21 03:25:42.233949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:54.963 [2024-11-21 03:25:42.233973] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.963 [2024-11-21 03:25:42.233993] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:54.963 [2024-11-21 03:25:42.234029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.963 [2024-11-21 03:25:42.238582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:54.963 spare 00:16:54.963 03:25:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.963 03:25:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:54.963 [2024-11-21 03:25:42.240441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.951 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.951 "name": "raid_bdev1", 00:16:55.951 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:55.951 "strip_size_kb": 0, 00:16:55.951 "state": "online", 00:16:55.951 "raid_level": "raid1", 00:16:55.951 "superblock": true, 00:16:55.951 "num_base_bdevs": 2, 00:16:55.951 "num_base_bdevs_discovered": 2, 00:16:55.951 "num_base_bdevs_operational": 2, 00:16:55.951 "process": { 00:16:55.951 "type": "rebuild", 00:16:55.951 "target": "spare", 00:16:55.951 "progress": { 00:16:55.951 "blocks": 2560, 00:16:55.951 "percent": 32 00:16:55.951 } 00:16:55.951 }, 00:16:55.951 "base_bdevs_list": [ 00:16:55.951 { 00:16:55.951 "name": "spare", 00:16:55.951 "uuid": "d1147491-b38b-5344-b737-ffbf6fac9f6f", 00:16:55.951 "is_configured": true, 00:16:55.951 "data_offset": 256, 00:16:55.951 "data_size": 7936 00:16:55.951 }, 00:16:55.951 { 00:16:55.952 "name": "BaseBdev2", 00:16:55.952 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:55.952 "is_configured": true, 00:16:55.952 "data_offset": 256, 00:16:55.952 "data_size": 7936 00:16:55.952 } 00:16:55.952 ] 00:16:55.952 }' 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.952 [2024-11-21 03:25:43.378557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.952 [2024-11-21 03:25:43.446510] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.952 [2024-11-21 03:25:43.446560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.952 [2024-11-21 03:25:43.446576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.952 [2024-11-21 03:25:43.446582] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.952 "name": "raid_bdev1", 00:16:55.952 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:55.952 "strip_size_kb": 0, 00:16:55.952 "state": "online", 00:16:55.952 "raid_level": "raid1", 00:16:55.952 "superblock": true, 00:16:55.952 "num_base_bdevs": 2, 00:16:55.952 "num_base_bdevs_discovered": 1, 00:16:55.952 "num_base_bdevs_operational": 1, 00:16:55.952 "base_bdevs_list": [ 00:16:55.952 { 00:16:55.952 "name": null, 00:16:55.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.952 "is_configured": false, 00:16:55.952 "data_offset": 0, 00:16:55.952 "data_size": 7936 00:16:55.952 }, 00:16:55.952 { 00:16:55.952 "name": "BaseBdev2", 00:16:55.952 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:55.952 "is_configured": true, 00:16:55.952 "data_offset": 256, 00:16:55.952 "data_size": 7936 00:16:55.952 } 00:16:55.952 ] 00:16:55.952 }' 
00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.952 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.522 "name": "raid_bdev1", 00:16:56.522 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:56.522 "strip_size_kb": 0, 00:16:56.522 "state": "online", 00:16:56.522 "raid_level": "raid1", 00:16:56.522 "superblock": true, 00:16:56.522 "num_base_bdevs": 2, 00:16:56.522 "num_base_bdevs_discovered": 1, 00:16:56.522 "num_base_bdevs_operational": 1, 00:16:56.522 "base_bdevs_list": [ 00:16:56.522 { 00:16:56.522 "name": null, 00:16:56.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.522 "is_configured": false, 00:16:56.522 "data_offset": 0, 
00:16:56.522 "data_size": 7936 00:16:56.522 }, 00:16:56.522 { 00:16:56.522 "name": "BaseBdev2", 00:16:56.522 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:56.522 "is_configured": true, 00:16:56.522 "data_offset": 256, 00:16:56.522 "data_size": 7936 00:16:56.522 } 00:16:56.522 ] 00:16:56.522 }' 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.522 03:25:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.522 [2024-11-21 03:25:44.051181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:56.522 [2024-11-21 03:25:44.051231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.522 [2024-11-21 03:25:44.051251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:56.522 [2024-11-21 03:25:44.051260] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.522 [2024-11-21 03:25:44.051636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.522 [2024-11-21 03:25:44.051662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:56.522 [2024-11-21 03:25:44.051737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:56.522 [2024-11-21 03:25:44.051758] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:56.522 [2024-11-21 03:25:44.051770] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:56.522 [2024-11-21 03:25:44.051780] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:56.522 BaseBdev1 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.522 03:25:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.903 "name": "raid_bdev1", 00:16:57.903 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:57.903 "strip_size_kb": 0, 00:16:57.903 "state": "online", 00:16:57.903 "raid_level": "raid1", 00:16:57.903 "superblock": true, 00:16:57.903 "num_base_bdevs": 2, 00:16:57.903 "num_base_bdevs_discovered": 1, 00:16:57.903 "num_base_bdevs_operational": 1, 00:16:57.903 "base_bdevs_list": [ 00:16:57.903 { 00:16:57.903 "name": null, 00:16:57.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.903 "is_configured": false, 00:16:57.903 "data_offset": 0, 00:16:57.903 "data_size": 7936 00:16:57.903 }, 00:16:57.903 { 00:16:57.903 "name": "BaseBdev2", 00:16:57.903 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:57.903 "is_configured": true, 00:16:57.903 "data_offset": 256, 00:16:57.903 "data_size": 7936 00:16:57.903 } 00:16:57.903 ] 00:16:57.903 }' 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.903 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.163 "name": "raid_bdev1", 00:16:58.163 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:58.163 "strip_size_kb": 0, 00:16:58.163 "state": "online", 00:16:58.163 "raid_level": "raid1", 00:16:58.163 "superblock": true, 00:16:58.163 "num_base_bdevs": 2, 00:16:58.163 "num_base_bdevs_discovered": 1, 00:16:58.163 "num_base_bdevs_operational": 1, 00:16:58.163 "base_bdevs_list": [ 00:16:58.163 { 00:16:58.163 "name": null, 00:16:58.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.163 "is_configured": false, 00:16:58.163 "data_offset": 0, 00:16:58.163 "data_size": 7936 00:16:58.163 }, 00:16:58.163 { 00:16:58.163 "name": "BaseBdev2", 00:16:58.163 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:58.163 "is_configured": true, 
00:16:58.163 "data_offset": 256, 00:16:58.163 "data_size": 7936 00:16:58.163 } 00:16:58.163 ] 00:16:58.163 }' 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:58.163 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.164 [2024-11-21 03:25:45.699636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.164 [2024-11-21 03:25:45.699770] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:58.164 [2024-11-21 03:25:45.699785] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:58.164 request: 00:16:58.164 { 00:16:58.164 "base_bdev": "BaseBdev1", 00:16:58.164 "raid_bdev": "raid_bdev1", 00:16:58.164 "method": "bdev_raid_add_base_bdev", 00:16:58.164 "req_id": 1 00:16:58.164 } 00:16:58.164 Got JSON-RPC error response 00:16:58.164 response: 00:16:58.164 { 00:16:58.164 "code": -22, 00:16:58.164 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:58.164 } 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.164 03:25:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.544 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.544 "name": "raid_bdev1", 00:16:59.544 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:59.544 "strip_size_kb": 0, 00:16:59.544 "state": "online", 00:16:59.544 "raid_level": "raid1", 00:16:59.544 "superblock": true, 00:16:59.544 "num_base_bdevs": 2, 00:16:59.544 "num_base_bdevs_discovered": 1, 00:16:59.544 "num_base_bdevs_operational": 1, 00:16:59.544 "base_bdevs_list": [ 00:16:59.544 { 00:16:59.544 "name": null, 00:16:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.544 "is_configured": false, 00:16:59.544 "data_offset": 0, 00:16:59.544 "data_size": 7936 00:16:59.544 }, 00:16:59.544 { 00:16:59.544 "name": "BaseBdev2", 00:16:59.544 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808", 00:16:59.545 "is_configured": true, 00:16:59.545 "data_offset": 256, 00:16:59.545 "data_size": 7936 00:16:59.545 } 00:16:59.545 ] 00:16:59.545 }' 
00:16:59.545 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.545 03:25:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.805 "name": "raid_bdev1", 00:16:59.805 "uuid": "723cdc40-8b00-4ceb-9dde-e76f0b94f827", 00:16:59.805 "strip_size_kb": 0, 00:16:59.805 "state": "online", 00:16:59.805 "raid_level": "raid1", 00:16:59.805 "superblock": true, 00:16:59.805 "num_base_bdevs": 2, 00:16:59.805 "num_base_bdevs_discovered": 1, 00:16:59.805 "num_base_bdevs_operational": 1, 00:16:59.805 "base_bdevs_list": [ 00:16:59.805 { 00:16:59.805 "name": null, 00:16:59.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.805 "is_configured": false, 00:16:59.805 "data_offset": 0, 
00:16:59.805 "data_size": 7936
00:16:59.805 },
00:16:59.805 {
00:16:59.805 "name": "BaseBdev2",
00:16:59.805 "uuid": "64393f03-cd08-55d0-a133-f2bf9d933808",
00:16:59.805 "is_configured": true,
00:16:59.805 "data_offset": 256,
00:16:59.805 "data_size": 7936
00:16:59.805 }
00:16:59.805 ]
00:16:59.805 }'
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98904
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98904 ']'
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98904
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:59.805 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98904
00:17:00.066 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:00.066 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:00.066 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98904'
killing process with pid 98904
03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98904
00:17:00.066 Received shutdown signal, test time was about 60.000000 seconds
00:17:00.066
00:17:00.066 Latency(us)
00:17:00.066 [2024-11-21T03:25:47.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.066 [2024-11-21T03:25:47.632Z] ===================================================================================================================
00:17:00.066 [2024-11-21T03:25:47.632Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:17:00.066 [2024-11-21 03:25:47.397673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:00.066 [2024-11-21 03:25:47.397792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:00.066 [2024-11-21 03:25:47.397834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:00.066 [2024-11-21 03:25:47.397844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:00.066 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98904
00:17:00.066 [2024-11-21 03:25:47.429710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:00.326 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0
00:17:00.326
00:17:00.326 real	0m18.326s
00:17:00.326 user	0m24.277s
00:17:00.326 sys	0m2.745s
00:17:00.326 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:00.326 ************************************
00:17:00.326 END TEST raid_rebuild_test_sb_4k
00:17:00.326 ************************************
00:17:00.326 03:25:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:00.326 03:25:47 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32'
00:17:00.326 03:25:47 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true
00:17:00.326 03:25:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:00.326 03:25:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:00.326 03:25:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:00.326 ************************************
00:17:00.326 START TEST raid_state_function_test_sb_md_separate
00:17:00.326 ************************************
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
Process raid pid: 99592
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99592
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99592'
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99592
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99592 ']'
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:00.326 03:25:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:00.326 [2024-11-21 03:25:47.823759] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization...
00:17:00.326 [2024-11-21 03:25:47.823900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:00.587 [2024-11-21 03:25:47.966982] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:17:00.587 [2024-11-21 03:25:48.002824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:00.587 [2024-11-21 03:25:48.029334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:00.587 [2024-11-21 03:25:48.072301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:00.587 [2024-11-21 03:25:48.072337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.157 [2024-11-21 03:25:48.647008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:01.157 [2024-11-21 03:25:48.647076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:01.157 [2024-11-21 03:25:48.647089] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:01.157 [2024-11-21 03:25:48.647097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:01.157 "name": "Existed_Raid",
00:17:01.157 "uuid": "ca4bdea1-4392-492b-8e51-bed70763af27",
00:17:01.157 "strip_size_kb": 0,
00:17:01.157 "state": "configuring",
00:17:01.157 "raid_level": "raid1",
00:17:01.157 "superblock": true,
00:17:01.157 "num_base_bdevs": 2,
00:17:01.157 "num_base_bdevs_discovered": 0,
00:17:01.157 "num_base_bdevs_operational": 2,
00:17:01.157 "base_bdevs_list": [
00:17:01.157 {
00:17:01.157 "name": "BaseBdev1",
00:17:01.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:01.157 "is_configured": false,
00:17:01.157 "data_offset": 0,
00:17:01.157 "data_size": 0
00:17:01.157 },
00:17:01.157 {
00:17:01.157 "name": "BaseBdev2",
00:17:01.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:01.157 "is_configured": false,
00:17:01.157 "data_offset": 0,
00:17:01.157 "data_size": 0
00:17:01.157 }
00:17:01.157 ]
00:17:01.157 }'
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:01.157 03:25:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 [2024-11-21 03:25:49.059071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:01.728 [2024-11-21 03:25:49.059106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 [2024-11-21 03:25:49.067095] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:01.728 [2024-11-21 03:25:49.067130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:01.728 [2024-11-21 03:25:49.067140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:01.728 [2024-11-21 03:25:49.067147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 [2024-11-21 03:25:49.085024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 [
00:17:01.728 {
00:17:01.728 "name": "BaseBdev1",
00:17:01.728 "aliases": [
00:17:01.728 "0d823207-0c7e-4ad9-b341-cd3855dedc59"
00:17:01.728 ],
00:17:01.728 "product_name": "Malloc disk",
00:17:01.728 "block_size": 4096,
00:17:01.728 "num_blocks": 8192,
00:17:01.728 "uuid": "0d823207-0c7e-4ad9-b341-cd3855dedc59",
00:17:01.728 "md_size": 32,
00:17:01.728 "md_interleave": false,
00:17:01.728 "dif_type": 0,
00:17:01.728 "assigned_rate_limits": {
00:17:01.728 "rw_ios_per_sec": 0,
00:17:01.728 "rw_mbytes_per_sec": 0,
00:17:01.728 "r_mbytes_per_sec": 0,
00:17:01.728 "w_mbytes_per_sec": 0
00:17:01.728 },
00:17:01.728 "claimed": true,
00:17:01.728 "claim_type": "exclusive_write",
00:17:01.728 "zoned": false,
00:17:01.728 "supported_io_types": {
00:17:01.728 "read": true,
00:17:01.728 "write": true,
00:17:01.728 "unmap": true,
00:17:01.728 "flush": true,
00:17:01.728 "reset": true,
00:17:01.728 "nvme_admin": false,
00:17:01.728 "nvme_io": false,
00:17:01.728 "nvme_io_md": false,
00:17:01.728 "write_zeroes": true,
00:17:01.728 "zcopy": true,
00:17:01.728 "get_zone_info": false,
00:17:01.728 "zone_management": false,
00:17:01.728 "zone_append": false,
00:17:01.728 "compare": false,
00:17:01.728 "compare_and_write": false,
00:17:01.728 "abort": true,
00:17:01.728 "seek_hole": false,
00:17:01.728 "seek_data": false,
00:17:01.728 "copy": true,
00:17:01.728 "nvme_iov_md": false
00:17:01.728 },
00:17:01.728 "memory_domains": [
00:17:01.728 {
00:17:01.728 "dma_device_id": "system",
00:17:01.728 "dma_device_type": 1
00:17:01.728 },
00:17:01.728 {
00:17:01.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:01.728 "dma_device_type": 2
00:17:01.728 }
00:17:01.728 ],
00:17:01.728 "driver_specific": {}
00:17:01.728 }
00:17:01.728 ]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:01.728 "name": "Existed_Raid",
00:17:01.728 "uuid": "4ac77ed9-8a89-4de4-98b7-17fa59a6fe37",
00:17:01.728 "strip_size_kb": 0,
00:17:01.728 "state": "configuring",
00:17:01.728 "raid_level": "raid1",
00:17:01.728 "superblock": true,
00:17:01.728 "num_base_bdevs": 2,
00:17:01.728 "num_base_bdevs_discovered": 1,
00:17:01.728 "num_base_bdevs_operational": 2,
00:17:01.728 "base_bdevs_list": [
00:17:01.728 {
00:17:01.728 "name": "BaseBdev1",
00:17:01.728 "uuid": "0d823207-0c7e-4ad9-b341-cd3855dedc59",
00:17:01.728 "is_configured": true,
00:17:01.728 "data_offset": 256,
00:17:01.728 "data_size": 7936
00:17:01.728 },
00:17:01.728 {
00:17:01.728 "name": "BaseBdev2",
00:17:01.728 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:01.728 "is_configured": false,
00:17:01.728 "data_offset": 0,
00:17:01.728 "data_size": 0
00:17:01.728 }
00:17:01.728 ]
00:17:01.728 }'
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:01.728 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.298 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:02.298 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.298 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.298 [2024-11-21 03:25:49.597241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:02.299 [2024-11-21 03:25:49.597347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.299 [2024-11-21 03:25:49.609292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:02.299 [2024-11-21 03:25:49.611174] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:02.299 [2024-11-21 03:25:49.611246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:02.299 "name": "Existed_Raid",
00:17:02.299 "uuid": "384bfbcd-6677-466d-9bce-74de8bb2ebf8",
00:17:02.299 "strip_size_kb": 0,
00:17:02.299 "state": "configuring",
00:17:02.299 "raid_level": "raid1",
00:17:02.299 "superblock": true,
00:17:02.299 "num_base_bdevs": 2,
00:17:02.299 "num_base_bdevs_discovered": 1,
00:17:02.299 "num_base_bdevs_operational": 2,
00:17:02.299 "base_bdevs_list": [
00:17:02.299 {
00:17:02.299 "name": "BaseBdev1",
00:17:02.299 "uuid": "0d823207-0c7e-4ad9-b341-cd3855dedc59",
00:17:02.299 "is_configured": true,
00:17:02.299 "data_offset": 256,
00:17:02.299 "data_size": 7936
00:17:02.299 },
00:17:02.299 {
00:17:02.299 "name": "BaseBdev2",
00:17:02.299 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.299 "is_configured": false,
00:17:02.299 "data_offset": 0,
00:17:02.299 "data_size": 0
00:17:02.299 }
00:17:02.299 ]
00:17:02.299 }'
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:02.299 03:25:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.560 [2024-11-21 03:25:50.069136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:02.560 [2024-11-21 03:25:50.069306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:17:02.560 [2024-11-21 03:25:50.069322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:02.560 [2024-11-21 03:25:50.069421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:17:02.560 [2024-11-21 03:25:50.069529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:17:02.560 [2024-11-21 03:25:50.069538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00
BaseBdev2
00:17:02.560 [2024-11-21 03:25:50.069613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.560 [
00:17:02.560 {
00:17:02.560 "name": "BaseBdev2",
00:17:02.560 "aliases": [
00:17:02.560 "df6c7d28-1163-401a-a1c7-f359ce861e14"
00:17:02.560 ],
00:17:02.560 "product_name": "Malloc disk",
00:17:02.560 "block_size": 4096,
00:17:02.560 "num_blocks": 8192,
00:17:02.560 "uuid": "df6c7d28-1163-401a-a1c7-f359ce861e14",
00:17:02.560 "md_size": 32,
00:17:02.560 "md_interleave": false,
00:17:02.560 "dif_type": 0,
00:17:02.560 "assigned_rate_limits": {
00:17:02.560 "rw_ios_per_sec": 0,
00:17:02.560 "rw_mbytes_per_sec": 0,
00:17:02.560 "r_mbytes_per_sec": 0,
00:17:02.560 "w_mbytes_per_sec": 0
00:17:02.560 },
00:17:02.560 "claimed": true,
00:17:02.560 "claim_type": "exclusive_write",
00:17:02.560 "zoned": false,
00:17:02.560 "supported_io_types": {
00:17:02.560 "read": true,
00:17:02.560 "write": true,
00:17:02.560 "unmap": true,
00:17:02.560 "flush": true,
00:17:02.560 "reset": true,
00:17:02.560 "nvme_admin": false,
00:17:02.560 "nvme_io": false,
00:17:02.560 "nvme_io_md": false,
00:17:02.560 "write_zeroes": true,
00:17:02.560 "zcopy": true,
00:17:02.560 "get_zone_info": false,
00:17:02.560 "zone_management": false,
00:17:02.560 "zone_append": false,
00:17:02.560 "compare": false,
00:17:02.560 "compare_and_write": false,
00:17:02.560 "abort": true,
00:17:02.560 "seek_hole": false,
00:17:02.560 "seek_data": false,
00:17:02.560 "copy": true,
00:17:02.560 "nvme_iov_md": false
00:17:02.560 },
00:17:02.560 "memory_domains": [
00:17:02.560 {
00:17:02.560 "dma_device_id": "system",
00:17:02.560 "dma_device_type": 1
00:17:02.560 },
00:17:02.560 {
00:17:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:02.560 "dma_device_type": 2
00:17:02.560 }
00:17:02.560 ],
00:17:02.560 "driver_specific": {}
00:17:02.560 }
00:17:02.560 ]
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.560 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:02.819 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.820 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:02.820 "name": "Existed_Raid",
00:17:02.820 "uuid": "384bfbcd-6677-466d-9bce-74de8bb2ebf8",
00:17:02.820 "strip_size_kb": 0,
00:17:02.820 "state": "online",
00:17:02.820 "raid_level": "raid1",
00:17:02.820 "superblock": true,
00:17:02.820 "num_base_bdevs": 2,
00:17:02.820 "num_base_bdevs_discovered": 2,
00:17:02.820 "num_base_bdevs_operational": 2,
00:17:02.820 "base_bdevs_list": [
00:17:02.820 {
00:17:02.820 "name": "BaseBdev1",
00:17:02.820 "uuid": "0d823207-0c7e-4ad9-b341-cd3855dedc59",
00:17:02.820 "is_configured": true,
00:17:02.820 "data_offset": 256,
00:17:02.820 "data_size": 7936
00:17:02.820 },
00:17:02.820 {
00:17:02.820 "name": "BaseBdev2",
00:17:02.820 "uuid": "df6c7d28-1163-401a-a1c7-f359ce861e14",
00:17:02.820 "is_configured": true,
00:17:02.820 "data_offset": 256,
00:17:02.820 "data_size": 7936
00:17:02.820 }
00:17:02.820 ]
00:17:02.820 }'
00:17:02.820 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:02.820 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:03.080 [2024-11-21 03:25:50.557549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:03.080 "name": "Existed_Raid",
00:17:03.080 "aliases": [
00:17:03.080 "384bfbcd-6677-466d-9bce-74de8bb2ebf8"
00:17:03.080 ],
00:17:03.080 "product_name": "Raid Volume",
00:17:03.080 "block_size": 4096,
00:17:03.080 "num_blocks": 7936,
00:17:03.080 "uuid": "384bfbcd-6677-466d-9bce-74de8bb2ebf8", 00:17:03.080 "md_size": 32, 00:17:03.080 "md_interleave": false, 00:17:03.080 "dif_type": 0, 00:17:03.080 "assigned_rate_limits": { 00:17:03.080 "rw_ios_per_sec": 0, 00:17:03.080 "rw_mbytes_per_sec": 0, 00:17:03.080 "r_mbytes_per_sec": 0, 00:17:03.080 "w_mbytes_per_sec": 0 00:17:03.080 }, 00:17:03.080 "claimed": false, 00:17:03.080 "zoned": false, 00:17:03.080 "supported_io_types": { 00:17:03.080 "read": true, 00:17:03.080 "write": true, 00:17:03.080 "unmap": false, 00:17:03.080 "flush": false, 00:17:03.080 "reset": true, 00:17:03.080 "nvme_admin": false, 00:17:03.080 "nvme_io": false, 00:17:03.080 "nvme_io_md": false, 00:17:03.080 "write_zeroes": true, 00:17:03.080 "zcopy": false, 00:17:03.080 "get_zone_info": false, 00:17:03.080 "zone_management": false, 00:17:03.080 "zone_append": false, 00:17:03.080 "compare": false, 00:17:03.080 "compare_and_write": false, 00:17:03.080 "abort": false, 00:17:03.080 "seek_hole": false, 00:17:03.080 "seek_data": false, 00:17:03.080 "copy": false, 00:17:03.080 "nvme_iov_md": false 00:17:03.080 }, 00:17:03.080 "memory_domains": [ 00:17:03.080 { 00:17:03.080 "dma_device_id": "system", 00:17:03.080 "dma_device_type": 1 00:17:03.080 }, 00:17:03.080 { 00:17:03.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.080 "dma_device_type": 2 00:17:03.080 }, 00:17:03.080 { 00:17:03.080 "dma_device_id": "system", 00:17:03.080 "dma_device_type": 1 00:17:03.080 }, 00:17:03.080 { 00:17:03.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.080 "dma_device_type": 2 00:17:03.080 } 00:17:03.080 ], 00:17:03.080 "driver_specific": { 00:17:03.080 "raid": { 00:17:03.080 "uuid": "384bfbcd-6677-466d-9bce-74de8bb2ebf8", 00:17:03.080 "strip_size_kb": 0, 00:17:03.080 "state": "online", 00:17:03.080 "raid_level": "raid1", 00:17:03.080 "superblock": true, 00:17:03.080 "num_base_bdevs": 2, 00:17:03.080 "num_base_bdevs_discovered": 2, 00:17:03.080 "num_base_bdevs_operational": 2, 00:17:03.080 
"base_bdevs_list": [ 00:17:03.080 { 00:17:03.080 "name": "BaseBdev1", 00:17:03.080 "uuid": "0d823207-0c7e-4ad9-b341-cd3855dedc59", 00:17:03.080 "is_configured": true, 00:17:03.080 "data_offset": 256, 00:17:03.080 "data_size": 7936 00:17:03.080 }, 00:17:03.080 { 00:17:03.080 "name": "BaseBdev2", 00:17:03.080 "uuid": "df6c7d28-1163-401a-a1c7-f359ce861e14", 00:17:03.080 "is_configured": true, 00:17:03.080 "data_offset": 256, 00:17:03.080 "data_size": 7936 00:17:03.080 } 00:17:03.080 ] 00:17:03.080 } 00:17:03.080 } 00:17:03.080 }' 00:17:03.080 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.341 BaseBdev2' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.341 [2024-11-21 03:25:50.789423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.341 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.341 "name": "Existed_Raid", 00:17:03.341 "uuid": "384bfbcd-6677-466d-9bce-74de8bb2ebf8", 00:17:03.341 "strip_size_kb": 0, 00:17:03.341 "state": "online", 00:17:03.341 "raid_level": "raid1", 00:17:03.341 "superblock": true, 00:17:03.341 "num_base_bdevs": 2, 00:17:03.341 "num_base_bdevs_discovered": 1, 00:17:03.341 "num_base_bdevs_operational": 1, 00:17:03.341 "base_bdevs_list": [ 00:17:03.341 { 00:17:03.341 "name": null, 00:17:03.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.341 "is_configured": false, 00:17:03.341 "data_offset": 0, 00:17:03.342 "data_size": 7936 00:17:03.342 }, 00:17:03.342 { 00:17:03.342 "name": "BaseBdev2", 00:17:03.342 "uuid": "df6c7d28-1163-401a-a1c7-f359ce861e14", 00:17:03.342 "is_configured": true, 00:17:03.342 "data_offset": 256, 00:17:03.342 "data_size": 7936 00:17:03.342 } 00:17:03.342 ] 00:17:03.342 }' 00:17:03.342 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.342 03:25:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.912 [2024-11-21 03:25:51.273840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.912 [2024-11-21 03:25:51.273945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.912 [2024-11-21 03:25:51.285967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.912 [2024-11-21 03:25:51.286032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.912 [2024-11-21 03:25:51.286042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:17:03.912 03:25:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99592 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99592 ']' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99592 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.912 03:25:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99592 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99592' 00:17:03.912 killing process with pid 99592 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99592 00:17:03.912 [2024-11-21 03:25:51.386643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.912 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99592 00:17:03.912 [2024-11-21 03:25:51.387678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.173 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.173 00:17:04.173 real 0m3.894s 00:17:04.173 user 0m6.072s 00:17:04.173 sys 0m0.907s 00:17:04.173 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.173 ************************************ 00:17:04.173 END TEST raid_state_function_test_sb_md_separate 00:17:04.173 ************************************ 00:17:04.173 03:25:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.173 03:25:51 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:04.173 03:25:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:04.173 03:25:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.173 03:25:51 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.173 ************************************ 00:17:04.173 START TEST raid_superblock_test_md_separate 00:17:04.173 ************************************ 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:04.173 03:25:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99828 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99828 00:17:04.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99828 ']' 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.173 03:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.434 [2024-11-21 03:25:51.786522] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:17:04.434 [2024-11-21 03:25:51.786641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99828 ] 00:17:04.434 [2024-11-21 03:25:51.922057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:04.434 [2024-11-21 03:25:51.961614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.434 [2024-11-21 03:25:51.986902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.694 [2024-11-21 03:25:52.029534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.694 [2024-11-21 03:25:52.029652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.265 03:25:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.265 malloc1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.265 [2024-11-21 03:25:52.621724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.265 [2024-11-21 03:25:52.621871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.265 [2024-11-21 03:25:52.621913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.265 [2024-11-21 03:25:52.621941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.265 [2024-11-21 03:25:52.623814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.265 [2024-11-21 03:25:52.623891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.265 pt1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:05.265 03:25:52 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.265 malloc2 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.265 [2024-11-21 03:25:52.654944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.265 [2024-11-21 03:25:52.655059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.265 [2024-11-21 03:25:52.655098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.265 [2024-11-21 03:25:52.655106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.265 [2024-11-21 03:25:52.656940] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.265 [2024-11-21 03:25:52.657025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.265 pt2 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.265 [2024-11-21 03:25:52.666970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.265 [2024-11-21 03:25:52.668793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.265 [2024-11-21 03:25:52.668978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:05.265 [2024-11-21 03:25:52.669012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.265 [2024-11-21 03:25:52.669124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:05.265 [2024-11-21 03:25:52.669238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:05.265 [2024-11-21 03:25:52.669248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:05.265 [2024-11-21 03:25:52.669325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.265 03:25:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.265 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.266 "name": "raid_bdev1", 00:17:05.266 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:05.266 "strip_size_kb": 0, 00:17:05.266 "state": "online", 00:17:05.266 "raid_level": "raid1", 00:17:05.266 "superblock": true, 00:17:05.266 "num_base_bdevs": 2, 00:17:05.266 "num_base_bdevs_discovered": 2, 00:17:05.266 "num_base_bdevs_operational": 2, 00:17:05.266 "base_bdevs_list": [ 00:17:05.266 { 00:17:05.266 "name": "pt1", 00:17:05.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.266 "is_configured": true, 00:17:05.266 "data_offset": 256, 00:17:05.266 "data_size": 7936 00:17:05.266 }, 00:17:05.266 { 00:17:05.266 "name": "pt2", 00:17:05.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.266 "is_configured": true, 00:17:05.266 "data_offset": 256, 00:17:05.266 "data_size": 7936 00:17:05.266 } 00:17:05.266 ] 00:17:05.266 }' 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.266 03:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.836 
03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.836 [2024-11-21 03:25:53.127379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.836 "name": "raid_bdev1", 00:17:05.836 "aliases": [ 00:17:05.836 "5883d12c-0671-42c5-9543-371268c6b381" 00:17:05.836 ], 00:17:05.836 "product_name": "Raid Volume", 00:17:05.836 "block_size": 4096, 00:17:05.836 "num_blocks": 7936, 00:17:05.836 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:05.836 "md_size": 32, 00:17:05.836 "md_interleave": false, 00:17:05.836 "dif_type": 0, 00:17:05.836 "assigned_rate_limits": { 00:17:05.836 "rw_ios_per_sec": 0, 00:17:05.836 "rw_mbytes_per_sec": 0, 00:17:05.836 "r_mbytes_per_sec": 0, 00:17:05.836 "w_mbytes_per_sec": 0 00:17:05.836 }, 00:17:05.836 "claimed": false, 00:17:05.836 "zoned": false, 00:17:05.836 "supported_io_types": { 00:17:05.836 "read": true, 00:17:05.836 "write": true, 00:17:05.836 "unmap": false, 00:17:05.836 "flush": false, 00:17:05.836 "reset": true, 00:17:05.836 "nvme_admin": false, 00:17:05.836 "nvme_io": false, 00:17:05.836 "nvme_io_md": false, 00:17:05.836 "write_zeroes": true, 00:17:05.836 "zcopy": false, 00:17:05.836 "get_zone_info": false, 00:17:05.836 "zone_management": false, 00:17:05.836 "zone_append": false, 00:17:05.836 "compare": false, 00:17:05.836 "compare_and_write": false, 00:17:05.836 "abort": false, 00:17:05.836 "seek_hole": false, 00:17:05.836 "seek_data": false, 00:17:05.836 "copy": false, 00:17:05.836 "nvme_iov_md": false 
00:17:05.836 }, 00:17:05.836 "memory_domains": [ 00:17:05.836 { 00:17:05.836 "dma_device_id": "system", 00:17:05.836 "dma_device_type": 1 00:17:05.836 }, 00:17:05.836 { 00:17:05.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.836 "dma_device_type": 2 00:17:05.836 }, 00:17:05.836 { 00:17:05.836 "dma_device_id": "system", 00:17:05.836 "dma_device_type": 1 00:17:05.836 }, 00:17:05.836 { 00:17:05.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.836 "dma_device_type": 2 00:17:05.836 } 00:17:05.836 ], 00:17:05.836 "driver_specific": { 00:17:05.836 "raid": { 00:17:05.836 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:05.836 "strip_size_kb": 0, 00:17:05.836 "state": "online", 00:17:05.836 "raid_level": "raid1", 00:17:05.836 "superblock": true, 00:17:05.836 "num_base_bdevs": 2, 00:17:05.836 "num_base_bdevs_discovered": 2, 00:17:05.836 "num_base_bdevs_operational": 2, 00:17:05.836 "base_bdevs_list": [ 00:17:05.836 { 00:17:05.836 "name": "pt1", 00:17:05.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.836 "is_configured": true, 00:17:05.836 "data_offset": 256, 00:17:05.836 "data_size": 7936 00:17:05.836 }, 00:17:05.836 { 00:17:05.836 "name": "pt2", 00:17:05.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.836 "is_configured": true, 00:17:05.836 "data_offset": 256, 00:17:05.836 "data_size": 7936 00:17:05.836 } 00:17:05.836 ] 00:17:05.836 } 00:17:05.836 } 00:17:05.836 }' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:05.836 pt2' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- 
# cmp_raid_bdev='4096 32 false 0' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:05.836 03:25:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 [2024-11-21 03:25:53.375379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.836 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5883d12c-0671-42c5-9543-371268c6b381 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 5883d12c-0671-42c5-9543-371268c6b381 ']' 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.097 [2024-11-21 03:25:53.419197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.097 [2024-11-21 03:25:53.419264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.097 [2024-11-21 03:25:53.419370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.097 [2024-11-21 03:25:53.419436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:17:06.097 [2024-11-21 03:25:53.419472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.097 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 [2024-11-21 03:25:53.559255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.098 [2024-11-21 03:25:53.561090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.098 [2024-11-21 03:25:53.561167] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:06.098 [2024-11-21 03:25:53.561254] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:06.098 [2024-11-21 03:25:53.561294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.098 [2024-11-21 03:25:53.561322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:17:06.098 request: 00:17:06.098 { 00:17:06.098 "name": "raid_bdev1", 00:17:06.098 "raid_level": "raid1", 00:17:06.098 "base_bdevs": [ 00:17:06.098 "malloc1", 00:17:06.098 "malloc2" 00:17:06.098 ], 00:17:06.098 "superblock": false, 00:17:06.098 "method": "bdev_raid_create", 00:17:06.098 "req_id": 1 00:17:06.098 } 00:17:06.098 Got JSON-RPC error response 00:17:06.098 response: 00:17:06.098 { 00:17:06.098 "code": -17, 00:17:06.098 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.098 } 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 
00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 [2024-11-21 03:25:53.627236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.098 [2024-11-21 03:25:53.627285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.098 [2024-11-21 03:25:53.627298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.098 [2024-11-21 03:25:53.627312] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.098 [2024-11-21 03:25:53.629215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.098 [2024-11-21 03:25:53.629251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.098 [2024-11-21 03:25:53.629289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:06.098 [2024-11-21 03:25:53.629317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.098 pt1 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.358 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.358 "name": "raid_bdev1", 00:17:06.358 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:06.358 "strip_size_kb": 0, 00:17:06.358 "state": "configuring", 00:17:06.358 "raid_level": "raid1", 00:17:06.358 "superblock": true, 00:17:06.358 "num_base_bdevs": 2, 00:17:06.358 "num_base_bdevs_discovered": 1, 00:17:06.358 "num_base_bdevs_operational": 2, 00:17:06.358 "base_bdevs_list": [ 00:17:06.358 { 00:17:06.358 "name": "pt1", 00:17:06.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.358 "is_configured": true, 00:17:06.358 "data_offset": 256, 00:17:06.358 "data_size": 7936 00:17:06.358 }, 00:17:06.358 { 00:17:06.358 "name": null, 00:17:06.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.358 "is_configured": false, 00:17:06.358 "data_offset": 256, 00:17:06.358 "data_size": 7936 00:17:06.358 } 00:17:06.358 ] 00:17:06.358 }' 00:17:06.358 03:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.358 03:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.618 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:06.618 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:06.618 03:25:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:06.618 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.618 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.618 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.618 [2024-11-21 03:25:54.063365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.618 [2024-11-21 03:25:54.063468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.618 [2024-11-21 03:25:54.063489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:06.619 [2024-11-21 03:25:54.063499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.619 [2024-11-21 03:25:54.063639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.619 [2024-11-21 03:25:54.063655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.619 [2024-11-21 03:25:54.063692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:06.619 [2024-11-21 03:25:54.063708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.619 [2024-11-21 03:25:54.063775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:06.619 [2024-11-21 03:25:54.063785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.619 [2024-11-21 03:25:54.063845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:06.619 [2024-11-21 03:25:54.063927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:06.619 [2024-11-21 03:25:54.063934] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:06.619 [2024-11-21 03:25:54.063993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.619 pt2 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.619 03:25:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.619 "name": "raid_bdev1", 00:17:06.619 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:06.619 "strip_size_kb": 0, 00:17:06.619 "state": "online", 00:17:06.619 "raid_level": "raid1", 00:17:06.619 "superblock": true, 00:17:06.619 "num_base_bdevs": 2, 00:17:06.619 "num_base_bdevs_discovered": 2, 00:17:06.619 "num_base_bdevs_operational": 2, 00:17:06.619 "base_bdevs_list": [ 00:17:06.619 { 00:17:06.619 "name": "pt1", 00:17:06.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.619 "is_configured": true, 00:17:06.619 "data_offset": 256, 00:17:06.619 "data_size": 7936 00:17:06.619 }, 00:17:06.619 { 00:17:06.619 "name": "pt2", 00:17:06.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.619 "is_configured": true, 00:17:06.619 "data_offset": 256, 00:17:06.619 "data_size": 7936 00:17:06.619 } 00:17:06.619 ] 00:17:06.619 }' 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.619 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.189 [2024-11-21 03:25:54.527731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.189 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.190 "name": "raid_bdev1", 00:17:07.190 "aliases": [ 00:17:07.190 "5883d12c-0671-42c5-9543-371268c6b381" 00:17:07.190 ], 00:17:07.190 "product_name": "Raid Volume", 00:17:07.190 "block_size": 4096, 00:17:07.190 "num_blocks": 7936, 00:17:07.190 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:07.190 "md_size": 32, 00:17:07.190 "md_interleave": false, 00:17:07.190 "dif_type": 0, 00:17:07.190 "assigned_rate_limits": { 00:17:07.190 "rw_ios_per_sec": 0, 00:17:07.190 "rw_mbytes_per_sec": 0, 00:17:07.190 "r_mbytes_per_sec": 0, 00:17:07.190 "w_mbytes_per_sec": 0 00:17:07.190 }, 00:17:07.190 "claimed": false, 00:17:07.190 "zoned": false, 00:17:07.190 "supported_io_types": { 00:17:07.190 "read": true, 00:17:07.190 "write": true, 00:17:07.190 "unmap": false, 00:17:07.190 
"flush": false, 00:17:07.190 "reset": true, 00:17:07.190 "nvme_admin": false, 00:17:07.190 "nvme_io": false, 00:17:07.190 "nvme_io_md": false, 00:17:07.190 "write_zeroes": true, 00:17:07.190 "zcopy": false, 00:17:07.190 "get_zone_info": false, 00:17:07.190 "zone_management": false, 00:17:07.190 "zone_append": false, 00:17:07.190 "compare": false, 00:17:07.190 "compare_and_write": false, 00:17:07.190 "abort": false, 00:17:07.190 "seek_hole": false, 00:17:07.190 "seek_data": false, 00:17:07.190 "copy": false, 00:17:07.190 "nvme_iov_md": false 00:17:07.190 }, 00:17:07.190 "memory_domains": [ 00:17:07.190 { 00:17:07.190 "dma_device_id": "system", 00:17:07.190 "dma_device_type": 1 00:17:07.190 }, 00:17:07.190 { 00:17:07.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.190 "dma_device_type": 2 00:17:07.190 }, 00:17:07.190 { 00:17:07.190 "dma_device_id": "system", 00:17:07.190 "dma_device_type": 1 00:17:07.190 }, 00:17:07.190 { 00:17:07.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.190 "dma_device_type": 2 00:17:07.190 } 00:17:07.190 ], 00:17:07.190 "driver_specific": { 00:17:07.190 "raid": { 00:17:07.190 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:07.190 "strip_size_kb": 0, 00:17:07.190 "state": "online", 00:17:07.190 "raid_level": "raid1", 00:17:07.190 "superblock": true, 00:17:07.190 "num_base_bdevs": 2, 00:17:07.190 "num_base_bdevs_discovered": 2, 00:17:07.190 "num_base_bdevs_operational": 2, 00:17:07.190 "base_bdevs_list": [ 00:17:07.190 { 00:17:07.190 "name": "pt1", 00:17:07.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.190 "is_configured": true, 00:17:07.190 "data_offset": 256, 00:17:07.190 "data_size": 7936 00:17:07.190 }, 00:17:07.190 { 00:17:07.190 "name": "pt2", 00:17:07.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.190 "is_configured": true, 00:17:07.190 "data_offset": 256, 00:17:07.190 "data_size": 7936 00:17:07.190 } 00:17:07.190 ] 00:17:07.190 } 00:17:07.190 } 00:17:07.190 }' 00:17:07.190 03:25:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.190 pt2' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.190 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.190 [2024-11-21 03:25:54.747774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 5883d12c-0671-42c5-9543-371268c6b381 '!=' 5883d12c-0671-42c5-9543-371268c6b381 ']' 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.450 [2024-11-21 03:25:54.779564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.450 03:25:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.450 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.450 "name": "raid_bdev1", 00:17:07.450 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:07.450 "strip_size_kb": 0, 00:17:07.450 "state": "online", 00:17:07.450 "raid_level": "raid1", 00:17:07.450 "superblock": true, 00:17:07.450 "num_base_bdevs": 2, 00:17:07.450 "num_base_bdevs_discovered": 1, 00:17:07.450 "num_base_bdevs_operational": 1, 00:17:07.450 "base_bdevs_list": [ 00:17:07.450 { 00:17:07.450 "name": null, 00:17:07.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.450 "is_configured": false, 00:17:07.450 "data_offset": 0, 00:17:07.450 "data_size": 7936 00:17:07.450 }, 00:17:07.450 { 00:17:07.450 "name": "pt2", 00:17:07.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.450 "is_configured": true, 00:17:07.451 "data_offset": 256, 00:17:07.451 "data_size": 7936 00:17:07.451 } 00:17:07.451 ] 00:17:07.451 }' 00:17:07.451 03:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.451 03:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.711 [2024-11-21 03:25:55.211659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.711 
[2024-11-21 03:25:55.211737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.711 [2024-11-21 03:25:55.211814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.711 [2024-11-21 03:25:55.211869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.711 [2024-11-21 03:25:55.211913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.711 [2024-11-21 03:25:55.267679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.711 [2024-11-21 03:25:55.268177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.711 [2024-11-21 03:25:55.268289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:07.711 [2024-11-21 03:25:55.268370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.711 [2024-11-21 03:25:55.270439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.711 [2024-11-21 03:25:55.270580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.711 [2024-11-21 03:25:55.270701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.711 [2024-11-21 
03:25:55.270767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.711 [2024-11-21 03:25:55.270885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:07.711 [2024-11-21 03:25:55.270925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.711 [2024-11-21 03:25:55.271039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:07.711 [2024-11-21 03:25:55.271172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:07.711 [2024-11-21 03:25:55.271222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:07.711 [2024-11-21 03:25:55.271324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.711 pt2 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.711 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.971 "name": "raid_bdev1", 00:17:07.971 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:07.971 "strip_size_kb": 0, 00:17:07.971 "state": "online", 00:17:07.971 "raid_level": "raid1", 00:17:07.971 "superblock": true, 00:17:07.971 "num_base_bdevs": 2, 00:17:07.971 "num_base_bdevs_discovered": 1, 00:17:07.971 "num_base_bdevs_operational": 1, 00:17:07.971 "base_bdevs_list": [ 00:17:07.971 { 00:17:07.971 "name": null, 00:17:07.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.971 "is_configured": false, 00:17:07.971 "data_offset": 256, 00:17:07.971 "data_size": 7936 00:17:07.971 }, 00:17:07.971 { 00:17:07.971 "name": "pt2", 00:17:07.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.971 "is_configured": true, 00:17:07.971 "data_offset": 256, 00:17:07.971 "data_size": 7936 00:17:07.971 } 00:17:07.971 ] 00:17:07.971 }' 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.971 03:25:55 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.231 [2024-11-21 03:25:55.719800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.231 [2024-11-21 03:25:55.719873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.231 [2024-11-21 03:25:55.719932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.231 [2024-11-21 03:25:55.719973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.231 [2024-11-21 03:25:55.719982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:08.231 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:08.232 
03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.232 [2024-11-21 03:25:55.779830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.232 [2024-11-21 03:25:55.780062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.232 [2024-11-21 03:25:55.780122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:08.232 [2024-11-21 03:25:55.780154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.232 [2024-11-21 03:25:55.782197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.232 [2024-11-21 03:25:55.782303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.232 [2024-11-21 03:25:55.782439] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.232 [2024-11-21 03:25:55.782497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.232 [2024-11-21 03:25:55.782636] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:08.232 [2024-11-21 03:25:55.782688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.232 [2024-11-21 03:25:55.782726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:17:08.232 [2024-11-21 03:25:55.782813] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.232 [2024-11-21 03:25:55.782935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:08.232 [2024-11-21 03:25:55.782971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.232 [2024-11-21 03:25:55.783060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:08.232 [2024-11-21 03:25:55.783143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:08.232 [2024-11-21 03:25:55.783152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:08.232 [2024-11-21 03:25:55.783224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.232 pt1 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.232 03:25:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.232 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.492 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.492 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.492 "name": "raid_bdev1", 00:17:08.492 "uuid": "5883d12c-0671-42c5-9543-371268c6b381", 00:17:08.492 "strip_size_kb": 0, 00:17:08.492 "state": "online", 00:17:08.492 "raid_level": "raid1", 00:17:08.492 "superblock": true, 00:17:08.492 "num_base_bdevs": 2, 00:17:08.492 "num_base_bdevs_discovered": 1, 00:17:08.492 "num_base_bdevs_operational": 1, 00:17:08.492 "base_bdevs_list": [ 00:17:08.492 { 00:17:08.492 "name": null, 00:17:08.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.492 "is_configured": false, 00:17:08.492 "data_offset": 256, 00:17:08.492 "data_size": 7936 00:17:08.492 }, 00:17:08.492 { 00:17:08.492 "name": "pt2", 00:17:08.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.492 "is_configured": true, 00:17:08.492 "data_offset": 256, 00:17:08.492 "data_size": 7936 00:17:08.492 } 00:17:08.492 ] 00:17:08.492 }' 00:17:08.492 03:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:08.492 03:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.752 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:08.752 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:08.752 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.752 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.752 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.752 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:09.013 [2024-11-21 03:25:56.320203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 5883d12c-0671-42c5-9543-371268c6b381 '!=' 5883d12c-0671-42c5-9543-371268c6b381 ']' 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99828 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99828 ']' 00:17:09.013 
03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 99828 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99828 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.013 killing process with pid 99828 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99828' 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 99828 00:17:09.013 [2024-11-21 03:25:56.397938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.013 [2024-11-21 03:25:56.398007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.013 [2024-11-21 03:25:56.398057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.013 [2024-11-21 03:25:56.398068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:09.013 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 99828 00:17:09.013 [2024-11-21 03:25:56.421657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.273 03:25:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:09.273 00:17:09.273 real 0m4.952s 00:17:09.273 user 0m8.082s 00:17:09.273 sys 0m1.120s 00:17:09.273 
************************************ 00:17:09.273 END TEST raid_superblock_test_md_separate 00:17:09.273 ************************************ 00:17:09.273 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.273 03:25:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.273 03:25:56 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:09.273 03:25:56 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:09.273 03:25:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:09.273 03:25:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.273 03:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.273 ************************************ 00:17:09.273 START TEST raid_rebuild_test_sb_md_separate 00:17:09.273 ************************************ 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.273 03:25:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.273 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@597 -- # raid_pid=100147 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 100147 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 100147 ']' 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.274 03:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.274 [2024-11-21 03:25:56.829868] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:17:09.274 [2024-11-21 03:25:56.830423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100147 ] 00:17:09.274 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:09.274 Zero copy mechanism will not be used. 00:17:09.534 [2024-11-21 03:25:56.965685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:17:09.534 [2024-11-21 03:25:56.999146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.534 [2024-11-21 03:25:57.024956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.534 [2024-11-21 03:25:57.067940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.534 [2024-11-21 03:25:57.067981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.103 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.103 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:10.103 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.104 BaseBdev1_malloc 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.104 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 [2024-11-21 03:25:57.668183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:10.365 [2024-11-21 03:25:57.668511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:10.365 [2024-11-21 03:25:57.668587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:10.365 [2024-11-21 03:25:57.668643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.365 [2024-11-21 03:25:57.670593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.365 [2024-11-21 03:25:57.670742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:10.365 BaseBdev1 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 BaseBdev2_malloc 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 [2024-11-21 03:25:57.697619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:10.365 [2024-11-21 03:25:57.697811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.365 [2024-11-21 03:25:57.697921] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:10.365 [2024-11-21 03:25:57.698003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.365 [2024-11-21 03:25:57.699944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.365 [2024-11-21 03:25:57.700082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:10.365 BaseBdev2 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 spare_malloc 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 spare_delay 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.365 [2024-11-21 03:25:57.753473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:10.365 [2024-11-21 03:25:57.753760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.365 [2024-11-21 03:25:57.753885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:10.365 [2024-11-21 03:25:57.753966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.365 [2024-11-21 03:25:57.756890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.365 [2024-11-21 03:25:57.756929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:10.365 spare 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 [2024-11-21 03:25:57.765533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.365 [2024-11-21 03:25:57.767674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.365 [2024-11-21 03:25:57.767888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:10.365 [2024-11-21 03:25:57.767943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:10.365 [2024-11-21 03:25:57.768070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:10.365 [2024-11-21 03:25:57.768236] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:10.365 [2024-11-21 03:25:57.768285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:10.365 [2024-11-21 03:25:57.768431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.365 03:25:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.365 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.365 "name": "raid_bdev1", 00:17:10.365 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:10.366 "strip_size_kb": 0, 00:17:10.366 "state": "online", 00:17:10.366 "raid_level": "raid1", 00:17:10.366 "superblock": true, 00:17:10.366 "num_base_bdevs": 2, 00:17:10.366 "num_base_bdevs_discovered": 2, 00:17:10.366 "num_base_bdevs_operational": 2, 00:17:10.366 "base_bdevs_list": [ 00:17:10.366 { 00:17:10.366 "name": "BaseBdev1", 00:17:10.366 "uuid": "7ee05fed-2c85-5567-a3c0-aeac76bfc590", 00:17:10.366 "is_configured": true, 00:17:10.366 "data_offset": 256, 00:17:10.366 "data_size": 7936 00:17:10.366 }, 00:17:10.366 { 00:17:10.366 "name": "BaseBdev2", 00:17:10.366 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:10.366 "is_configured": true, 00:17:10.366 "data_offset": 256, 00:17:10.366 "data_size": 7936 00:17:10.366 } 00:17:10.366 ] 00:17:10.366 }' 00:17:10.366 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.366 03:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set 
+x 00:17:10.936 [2024-11-21 03:25:58.217845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.936 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:10.936 [2024-11-21 03:25:58.493692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:11.196 /dev/nbd0 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.196 1+0 records in 00:17:11.196 1+0 records out 00:17:11.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230212 s, 17.8 MB/s 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:11.196 03:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:11.779 7936+0 records in 00:17:11.779 7936+0 records out 00:17:11.779 32505856 bytes (33 MB, 31 MiB) copied, 0.52468 s, 62.0 MB/s 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:11.779 [2024-11-21 03:25:59.312038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.779 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.043 [2024-11-21 03:25:59.344937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.043 
03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.043 "name": "raid_bdev1", 00:17:12.043 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:12.043 "strip_size_kb": 0, 00:17:12.043 "state": "online", 00:17:12.043 "raid_level": "raid1", 00:17:12.043 "superblock": true, 00:17:12.043 "num_base_bdevs": 2, 00:17:12.043 "num_base_bdevs_discovered": 1, 00:17:12.043 "num_base_bdevs_operational": 1, 00:17:12.043 "base_bdevs_list": [ 00:17:12.043 { 00:17:12.043 "name": null, 00:17:12.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.043 "is_configured": false, 00:17:12.043 "data_offset": 0, 00:17:12.043 "data_size": 7936 00:17:12.043 }, 00:17:12.043 { 00:17:12.043 "name": "BaseBdev2", 00:17:12.043 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:12.043 "is_configured": true, 00:17:12.043 "data_offset": 256, 00:17:12.043 "data_size": 7936 00:17:12.043 } 00:17:12.043 ] 00:17:12.043 }' 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.043 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.303 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.303 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.303 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.303 [2024-11-21 03:25:59.793075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.303 [2024-11-21 03:25:59.795680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:17:12.303 [2024-11-21 03:25:59.797684] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.303 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.303 03:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.290 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.551 "name": "raid_bdev1", 00:17:13.551 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:13.551 "strip_size_kb": 0, 00:17:13.551 "state": "online", 00:17:13.551 "raid_level": "raid1", 00:17:13.551 "superblock": true, 00:17:13.551 "num_base_bdevs": 2, 00:17:13.551 "num_base_bdevs_discovered": 2, 00:17:13.551 
"num_base_bdevs_operational": 2, 00:17:13.551 "process": { 00:17:13.551 "type": "rebuild", 00:17:13.551 "target": "spare", 00:17:13.551 "progress": { 00:17:13.551 "blocks": 2560, 00:17:13.551 "percent": 32 00:17:13.551 } 00:17:13.551 }, 00:17:13.551 "base_bdevs_list": [ 00:17:13.551 { 00:17:13.551 "name": "spare", 00:17:13.551 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:13.551 "is_configured": true, 00:17:13.551 "data_offset": 256, 00:17:13.551 "data_size": 7936 00:17:13.551 }, 00:17:13.551 { 00:17:13.551 "name": "BaseBdev2", 00:17:13.551 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:13.551 "is_configured": true, 00:17:13.551 "data_offset": 256, 00:17:13.551 "data_size": 7936 00:17:13.551 } 00:17:13.551 ] 00:17:13.551 }' 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.551 03:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.551 [2024-11-21 03:26:00.940039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.551 [2024-11-21 03:26:01.004496] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.551 [2024-11-21 03:26:01.004617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:13.551 [2024-11-21 03:26:01.004634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.551 [2024-11-21 03:26:01.004644] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.551 
03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.551 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.551 "name": "raid_bdev1", 00:17:13.551 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:13.551 "strip_size_kb": 0, 00:17:13.551 "state": "online", 00:17:13.551 "raid_level": "raid1", 00:17:13.551 "superblock": true, 00:17:13.551 "num_base_bdevs": 2, 00:17:13.551 "num_base_bdevs_discovered": 1, 00:17:13.551 "num_base_bdevs_operational": 1, 00:17:13.551 "base_bdevs_list": [ 00:17:13.551 { 00:17:13.551 "name": null, 00:17:13.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.551 "is_configured": false, 00:17:13.551 "data_offset": 0, 00:17:13.551 "data_size": 7936 00:17:13.552 }, 00:17:13.552 { 00:17:13.552 "name": "BaseBdev2", 00:17:13.552 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:13.552 "is_configured": true, 00:17:13.552 "data_offset": 256, 00:17:13.552 "data_size": 7936 00:17:13.552 } 00:17:13.552 ] 00:17:13.552 }' 00:17:13.552 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.552 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- 
# local raid_bdev_info 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.121 "name": "raid_bdev1", 00:17:14.121 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:14.121 "strip_size_kb": 0, 00:17:14.121 "state": "online", 00:17:14.121 "raid_level": "raid1", 00:17:14.121 "superblock": true, 00:17:14.121 "num_base_bdevs": 2, 00:17:14.121 "num_base_bdevs_discovered": 1, 00:17:14.121 "num_base_bdevs_operational": 1, 00:17:14.121 "base_bdevs_list": [ 00:17:14.121 { 00:17:14.121 "name": null, 00:17:14.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.121 "is_configured": false, 00:17:14.121 "data_offset": 0, 00:17:14.121 "data_size": 7936 00:17:14.121 }, 00:17:14.121 { 00:17:14.121 "name": "BaseBdev2", 00:17:14.121 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:14.121 "is_configured": true, 00:17:14.121 "data_offset": 256, 00:17:14.121 "data_size": 7936 00:17:14.121 } 00:17:14.121 ] 00:17:14.121 }' 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.121 
03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.121 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.122 [2024-11-21 03:26:01.584216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.122 [2024-11-21 03:26:01.586626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:17:14.122 [2024-11-21 03:26:01.588519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.122 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.122 03:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.061 03:26:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.061 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.323 "name": "raid_bdev1", 00:17:15.323 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:15.323 "strip_size_kb": 0, 00:17:15.323 "state": "online", 00:17:15.323 "raid_level": "raid1", 00:17:15.323 "superblock": true, 00:17:15.323 "num_base_bdevs": 2, 00:17:15.323 "num_base_bdevs_discovered": 2, 00:17:15.323 "num_base_bdevs_operational": 2, 00:17:15.323 "process": { 00:17:15.323 "type": "rebuild", 00:17:15.323 "target": "spare", 00:17:15.323 "progress": { 00:17:15.323 "blocks": 2560, 00:17:15.323 "percent": 32 00:17:15.323 } 00:17:15.323 }, 00:17:15.323 "base_bdevs_list": [ 00:17:15.323 { 00:17:15.323 "name": "spare", 00:17:15.323 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:15.323 "is_configured": true, 00:17:15.323 "data_offset": 256, 00:17:15.323 "data_size": 7936 00:17:15.323 }, 00:17:15.323 { 00:17:15.323 "name": "BaseBdev2", 00:17:15.323 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:15.323 "is_configured": true, 00:17:15.323 "data_offset": 256, 00:17:15.323 "data_size": 7936 00:17:15.323 } 00:17:15.323 ] 00:17:15.323 }' 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:15.323 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=597 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.323 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.324 "name": "raid_bdev1", 00:17:15.324 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:15.324 "strip_size_kb": 0, 00:17:15.324 "state": "online", 00:17:15.324 "raid_level": "raid1", 00:17:15.324 "superblock": true, 00:17:15.324 "num_base_bdevs": 2, 00:17:15.324 "num_base_bdevs_discovered": 2, 00:17:15.324 "num_base_bdevs_operational": 2, 00:17:15.324 "process": { 00:17:15.324 "type": "rebuild", 00:17:15.324 "target": "spare", 00:17:15.324 "progress": { 00:17:15.324 "blocks": 2816, 00:17:15.324 "percent": 35 00:17:15.324 } 00:17:15.324 }, 00:17:15.324 "base_bdevs_list": [ 00:17:15.324 { 00:17:15.324 "name": "spare", 00:17:15.324 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:15.324 "is_configured": true, 00:17:15.324 "data_offset": 256, 00:17:15.324 "data_size": 7936 00:17:15.324 }, 00:17:15.324 { 00:17:15.324 "name": "BaseBdev2", 00:17:15.324 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:15.324 "is_configured": true, 00:17:15.324 "data_offset": 256, 00:17:15.324 "data_size": 7936 00:17:15.324 } 00:17:15.324 ] 00:17:15.324 }' 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.324 03:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.707 03:26:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.707 "name": "raid_bdev1", 00:17:16.707 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:16.707 "strip_size_kb": 0, 00:17:16.707 "state": "online", 00:17:16.707 "raid_level": "raid1", 00:17:16.707 "superblock": true, 00:17:16.707 "num_base_bdevs": 2, 00:17:16.707 "num_base_bdevs_discovered": 2, 00:17:16.707 "num_base_bdevs_operational": 2, 00:17:16.707 "process": { 00:17:16.707 "type": "rebuild", 00:17:16.707 "target": "spare", 00:17:16.707 "progress": { 00:17:16.707 "blocks": 5632, 00:17:16.707 "percent": 70 00:17:16.707 } 00:17:16.707 
}, 00:17:16.707 "base_bdevs_list": [ 00:17:16.707 { 00:17:16.707 "name": "spare", 00:17:16.707 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:16.707 "is_configured": true, 00:17:16.707 "data_offset": 256, 00:17:16.707 "data_size": 7936 00:17:16.707 }, 00:17:16.707 { 00:17:16.707 "name": "BaseBdev2", 00:17:16.707 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:16.707 "is_configured": true, 00:17:16.707 "data_offset": 256, 00:17:16.707 "data_size": 7936 00:17:16.707 } 00:17:16.707 ] 00:17:16.707 }' 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.707 03:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.276 [2024-11-21 03:26:04.704520] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:17.276 [2024-11-21 03:26:04.704588] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:17.276 [2024-11-21 03:26:04.704676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.536 "name": "raid_bdev1", 00:17:17.536 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:17.536 "strip_size_kb": 0, 00:17:17.536 "state": "online", 00:17:17.536 "raid_level": "raid1", 00:17:17.536 "superblock": true, 00:17:17.536 "num_base_bdevs": 2, 00:17:17.536 "num_base_bdevs_discovered": 2, 00:17:17.536 "num_base_bdevs_operational": 2, 00:17:17.536 "base_bdevs_list": [ 00:17:17.536 { 00:17:17.536 "name": "spare", 00:17:17.536 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:17.536 "is_configured": true, 00:17:17.536 "data_offset": 256, 00:17:17.536 "data_size": 7936 00:17:17.536 }, 00:17:17.536 { 00:17:17.536 "name": "BaseBdev2", 00:17:17.536 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:17.536 "is_configured": true, 00:17:17.536 "data_offset": 256, 00:17:17.536 "data_size": 7936 00:17:17.536 } 00:17:17.536 ] 00:17:17.536 }' 00:17:17.536 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.797 03:26:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.797 "name": "raid_bdev1", 00:17:17.797 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:17.797 "strip_size_kb": 0, 00:17:17.797 "state": "online", 00:17:17.797 "raid_level": "raid1", 00:17:17.797 
"superblock": true, 00:17:17.797 "num_base_bdevs": 2, 00:17:17.797 "num_base_bdevs_discovered": 2, 00:17:17.797 "num_base_bdevs_operational": 2, 00:17:17.797 "base_bdevs_list": [ 00:17:17.797 { 00:17:17.797 "name": "spare", 00:17:17.797 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:17.797 "is_configured": true, 00:17:17.797 "data_offset": 256, 00:17:17.797 "data_size": 7936 00:17:17.797 }, 00:17:17.797 { 00:17:17.797 "name": "BaseBdev2", 00:17:17.797 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:17.797 "is_configured": true, 00:17:17.797 "data_offset": 256, 00:17:17.797 "data_size": 7936 00:17:17.797 } 00:17:17.797 ] 00:17:17.797 }' 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.797 "name": "raid_bdev1", 00:17:17.797 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:17.797 "strip_size_kb": 0, 00:17:17.797 "state": "online", 00:17:17.797 "raid_level": "raid1", 00:17:17.797 "superblock": true, 00:17:17.797 "num_base_bdevs": 2, 00:17:17.797 "num_base_bdevs_discovered": 2, 00:17:17.797 "num_base_bdevs_operational": 2, 00:17:17.797 "base_bdevs_list": [ 00:17:17.797 { 00:17:17.797 "name": "spare", 00:17:17.797 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f", 00:17:17.797 "is_configured": true, 00:17:17.797 "data_offset": 256, 00:17:17.797 "data_size": 7936 00:17:17.797 }, 00:17:17.797 { 00:17:17.797 "name": "BaseBdev2", 00:17:17.797 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:17.797 "is_configured": true, 00:17:17.797 "data_offset": 256, 00:17:17.797 "data_size": 7936 00:17:17.797 } 00:17:17.797 ] 00:17:17.797 }' 00:17:17.797 03:26:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.797 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.367 [2024-11-21 03:26:05.743659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.367 [2024-11-21 03:26:05.743750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.367 [2024-11-21 03:26:05.743854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.367 [2024-11-21 03:26:05.743935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.367 [2024-11-21 03:26:05.743968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:18.367 03:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:18.628 /dev/nbd0 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@873 -- # local i 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.628 1+0 records in 00:17:18.628 1+0 records out 00:17:18.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595013 s, 6.9 MB/s 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.628 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:18.628 03:26:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:18.889 /dev/nbd1 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.889 1+0 records in 00:17:18.889 1+0 records out 00:17:18.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311687 s, 13.1 MB/s 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:18.889 03:26:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.889 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.149 
03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.149 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:19.408 
03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.408 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:19.408 [2024-11-21 03:26:06.817113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:19.408 [2024-11-21 03:26:06.817172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:19.408 [2024-11-21 03:26:06.817196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:19.408 [2024-11-21 03:26:06.817205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:19.408 [2024-11-21 03:26:06.819006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:19.408 [2024-11-21 03:26:06.819048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:19.408 [2024-11-21 03:26:06.819107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:19.408 [2024-11-21 03:26:06.819160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:19.409 [2024-11-21 03:26:06.819261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:19.409 spare
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:19.409 [2024-11-21 03:26:06.919318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:19.409 [2024-11-21 03:26:06.919349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:19.409 [2024-11-21 03:26:06.919443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60
00:17:19.409 [2024-11-21 03:26:06.919537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:19.409 [2024-11-21 03:26:06.919545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:19.409 [2024-11-21 03:26:06.919629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:19.409 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.668 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:19.668 "name": "raid_bdev1",
00:17:19.668 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:19.668 "strip_size_kb": 0,
00:17:19.668 "state": "online",
00:17:19.668 "raid_level": "raid1",
00:17:19.668 "superblock": true,
00:17:19.668 "num_base_bdevs": 2,
00:17:19.668 "num_base_bdevs_discovered": 2,
00:17:19.668 "num_base_bdevs_operational": 2,
00:17:19.668 "base_bdevs_list": [
00:17:19.668 {
00:17:19.668 "name": "spare",
00:17:19.668 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f",
00:17:19.668 "is_configured": true,
00:17:19.668 "data_offset": 256,
00:17:19.668 "data_size": 7936
00:17:19.668 },
00:17:19.668 {
00:17:19.668 "name": "BaseBdev2",
00:17:19.668 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:19.668 "is_configured": true,
00:17:19.668 "data_offset": 256,
00:17:19.668 "data_size": 7936
00:17:19.668 }
00:17:19.668 ]
00:17:19.668 }'
00:17:19.668 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:19.668 03:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:19.928 "name": "raid_bdev1",
00:17:19.928 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:19.928 "strip_size_kb": 0,
00:17:19.928 "state": "online",
00:17:19.928 "raid_level": "raid1",
00:17:19.928 "superblock": true,
00:17:19.928 "num_base_bdevs": 2,
00:17:19.928 "num_base_bdevs_discovered": 2,
00:17:19.928 "num_base_bdevs_operational": 2,
00:17:19.928 "base_bdevs_list": [
00:17:19.928 {
00:17:19.928 "name": "spare",
00:17:19.928 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f",
00:17:19.928 "is_configured": true,
00:17:19.928 "data_offset": 256,
00:17:19.928 "data_size": 7936
00:17:19.928 },
00:17:19.928 {
00:17:19.928 "name": "BaseBdev2",
00:17:19.928 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:19.928 "is_configured": true,
00:17:19.928 "data_offset": 256,
00:17:19.928 "data_size": 7936
00:17:19.928 }
00:17:19.928 ]
00:17:19.928 }'
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:19.928 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.188 [2024-11-21 03:26:07.573340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:20.188 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:20.189 "name": "raid_bdev1",
00:17:20.189 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:20.189 "strip_size_kb": 0,
00:17:20.189 "state": "online",
00:17:20.189 "raid_level": "raid1",
00:17:20.189 "superblock": true,
00:17:20.189 "num_base_bdevs": 2,
00:17:20.189 "num_base_bdevs_discovered": 1,
00:17:20.189 "num_base_bdevs_operational": 1,
00:17:20.189 "base_bdevs_list": [
00:17:20.189 {
00:17:20.189 "name": null,
00:17:20.189 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.189 "is_configured": false,
00:17:20.189 "data_offset": 0,
00:17:20.189 "data_size": 7936
00:17:20.189 },
00:17:20.189 {
00:17:20.189 "name": "BaseBdev2",
00:17:20.189 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:20.189 "is_configured": true,
00:17:20.189 "data_offset": 256,
00:17:20.189 "data_size": 7936
00:17:20.189 }
00:17:20.189 ]
00:17:20.189 }'
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:20.189 03:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.758 03:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:20.758 03:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.758 03:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:20.759 [2024-11-21 03:26:08.025485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:20.759 [2024-11-21 03:26:08.025690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:20.759 [2024-11-21 03:26:08.025754] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:20.759 [2024-11-21 03:26:08.025815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:20.759 [2024-11-21 03:26:08.028281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030
00:17:20.759 [2024-11-21 03:26:08.030101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:20.759 03:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.759 03:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:21.699 "name": "raid_bdev1",
00:17:21.699 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:21.699 "strip_size_kb": 0,
00:17:21.699 "state": "online",
00:17:21.699 "raid_level": "raid1",
00:17:21.699 "superblock": true,
00:17:21.699 "num_base_bdevs": 2,
00:17:21.699 "num_base_bdevs_discovered": 2,
00:17:21.699 "num_base_bdevs_operational": 2,
00:17:21.699 "process": {
00:17:21.699 "type": "rebuild",
00:17:21.699 "target": "spare",
00:17:21.699 "progress": {
00:17:21.699 "blocks": 2560,
00:17:21.699 "percent": 32
00:17:21.699 }
00:17:21.699 },
00:17:21.699 "base_bdevs_list": [
00:17:21.699 {
00:17:21.699 "name": "spare",
00:17:21.699 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f",
00:17:21.699 "is_configured": true,
00:17:21.699 "data_offset": 256,
00:17:21.699 "data_size": 7936
00:17:21.699 },
00:17:21.699 {
00:17:21.699 "name": "BaseBdev2",
00:17:21.699 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:21.699 "is_configured": true,
00:17:21.699 "data_offset": 256,
00:17:21.699 "data_size": 7936
00:17:21.699 }
00:17:21.699 ]
00:17:21.699 }'
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.699 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:21.699 [2024-11-21 03:26:09.183459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:21.699 [2024-11-21 03:26:09.236281] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:21.699 [2024-11-21 03:26:09.236385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:21.699 [2024-11-21 03:26:09.236416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:21.699 [2024-11-21 03:26:09.236438] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.700 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:21.959 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.959 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:21.959 "name": "raid_bdev1",
00:17:21.959 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:21.959 "strip_size_kb": 0,
00:17:21.959 "state": "online",
00:17:21.959 "raid_level": "raid1",
00:17:21.959 "superblock": true,
00:17:21.959 "num_base_bdevs": 2,
00:17:21.959 "num_base_bdevs_discovered": 1,
00:17:21.959 "num_base_bdevs_operational": 1,
00:17:21.959 "base_bdevs_list": [
00:17:21.959 {
00:17:21.959 "name": null,
00:17:21.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:21.959 "is_configured": false,
00:17:21.959 "data_offset": 0,
00:17:21.959 "data_size": 7936
00:17:21.959 },
00:17:21.959 {
00:17:21.959 "name": "BaseBdev2",
00:17:21.959 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:21.959 "is_configured": true,
00:17:21.959 "data_offset": 256,
00:17:21.959 "data_size": 7936
00:17:21.959 }
00:17:21.959 ]
00:17:21.959 }'
00:17:21.959 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:21.959 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.219 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:22.219 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.219 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:22.219 [2024-11-21 03:26:09.687632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:22.219 [2024-11-21 03:26:09.687688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:22.219 [2024-11-21 03:26:09.687711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:17:22.219 [2024-11-21 03:26:09.687722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:22.219 [2024-11-21 03:26:09.687924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:22.219 [2024-11-21 03:26:09.687941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:22.219 [2024-11-21 03:26:09.687990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:22.219 [2024-11-21 03:26:09.688009] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:22.219 [2024-11-21 03:26:09.688030] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:22.219 [2024-11-21 03:26:09.688053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:22.219 [2024-11-21 03:26:09.690087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100
00:17:22.219 [2024-11-21 03:26:09.691864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:22.219 spare
00:17:22.219 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.219 03:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1
00:17:23.159 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:23.159 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.160 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:23.420 "name": "raid_bdev1",
00:17:23.420 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:23.420 "strip_size_kb": 0,
00:17:23.420 "state": "online",
00:17:23.420 "raid_level": "raid1",
00:17:23.420 "superblock": true,
00:17:23.420 "num_base_bdevs": 2,
00:17:23.420 "num_base_bdevs_discovered": 2,
00:17:23.420 "num_base_bdevs_operational": 2,
00:17:23.420 "process": {
00:17:23.420 "type": "rebuild",
00:17:23.420 "target": "spare",
00:17:23.420 "progress": {
00:17:23.420 "blocks": 2560,
00:17:23.420 "percent": 32
00:17:23.420 }
00:17:23.420 },
00:17:23.420 "base_bdevs_list": [
00:17:23.420 {
00:17:23.420 "name": "spare",
00:17:23.420 "uuid": "694d02f0-ab87-566b-b510-c4988ea99e1f",
00:17:23.420 "is_configured": true,
00:17:23.420 "data_offset": 256,
00:17:23.420 "data_size": 7936
00:17:23.420 },
00:17:23.420 {
00:17:23.420 "name": "BaseBdev2",
00:17:23.420 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:23.420 "is_configured": true,
00:17:23.420 "data_offset": 256,
00:17:23.420 "data_size": 7936
00:17:23.420 }
00:17:23.420 ]
00:17:23.420 }'
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.420 [2024-11-21 03:26:10.861543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:23.420 [2024-11-21 03:26:10.897904] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:23.420 [2024-11-21 03:26:10.897957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:23.420 [2024-11-21 03:26:10.897973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:23.420 [2024-11-21 03:26:10.897980] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:23.420 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:23.421 "name": "raid_bdev1",
00:17:23.421 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:23.421 "strip_size_kb": 0,
00:17:23.421 "state": "online",
00:17:23.421 "raid_level": "raid1",
00:17:23.421 "superblock": true,
00:17:23.421 "num_base_bdevs": 2,
00:17:23.421 "num_base_bdevs_discovered": 1,
00:17:23.421 "num_base_bdevs_operational": 1,
00:17:23.421 "base_bdevs_list": [
00:17:23.421 {
00:17:23.421 "name": null,
00:17:23.421 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.421 "is_configured": false,
00:17:23.421 "data_offset": 0,
00:17:23.421 "data_size": 7936
00:17:23.421 },
00:17:23.421 {
00:17:23.421 "name": "BaseBdev2",
00:17:23.421 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:23.421 "is_configured": true,
00:17:23.421 "data_offset": 256,
00:17:23.421 "data_size": 7936
00:17:23.421 }
00:17:23.421 ]
00:17:23.421 }'
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:23.421 03:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:23.991 "name": "raid_bdev1",
00:17:23.991 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5",
00:17:23.991 "strip_size_kb": 0,
00:17:23.991 "state": "online",
00:17:23.991 "raid_level": "raid1",
00:17:23.991 "superblock": true,
00:17:23.991 "num_base_bdevs": 2,
00:17:23.991 "num_base_bdevs_discovered": 1,
00:17:23.991 "num_base_bdevs_operational": 1,
00:17:23.991 "base_bdevs_list": [
00:17:23.991 {
00:17:23.991 "name": null,
00:17:23.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.991 "is_configured": false,
00:17:23.991 "data_offset": 0,
00:17:23.991 "data_size": 7936
00:17:23.991 },
00:17:23.991 {
00:17:23.991 "name": "BaseBdev2",
00:17:23.991 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c",
00:17:23.991 "is_configured": true,
00:17:23.991 "data_offset": 256,
00:17:23.991 "data_size": 7936
00:17:23.991 }
00:17:23.991 ]
00:17:23.991 }'
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.991 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:23.991 [2024-11-21 03:26:11.510471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:23.991 [2024-11-21 03:26:11.510567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:23.991 [2024-11-21 03:26:11.510591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:17:23.991 [2024-11-21 03:26:11.510600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:23.991 [2024-11-21 03:26:11.510776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:23.991 [2024-11-21 03:26:11.510787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:23.991 [2024-11-21 03:26:11.510846] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:17:23.992 [2024-11-21 03:26:11.510859] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:23.992 [2024-11-21 03:26:11.510871] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:23.992 [2024-11-21 03:26:11.510880] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:17:23.992 BaseBdev1
00:17:23.992 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.992 03:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.373 "name": "raid_bdev1", 00:17:25.373 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:25.373 "strip_size_kb": 0, 00:17:25.373 "state": "online", 00:17:25.373 "raid_level": "raid1", 00:17:25.373 "superblock": true, 00:17:25.373 "num_base_bdevs": 2, 00:17:25.373 "num_base_bdevs_discovered": 1, 00:17:25.373 "num_base_bdevs_operational": 1, 00:17:25.373 "base_bdevs_list": [ 00:17:25.373 { 00:17:25.373 "name": null, 00:17:25.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.373 "is_configured": false, 00:17:25.373 "data_offset": 0, 00:17:25.373 "data_size": 7936 00:17:25.373 }, 00:17:25.373 { 00:17:25.373 "name": "BaseBdev2", 00:17:25.373 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:25.373 "is_configured": true, 00:17:25.373 "data_offset": 256, 00:17:25.373 "data_size": 7936 00:17:25.373 } 00:17:25.373 ] 00:17:25.373 }' 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.373 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.634 03:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.634 "name": "raid_bdev1", 00:17:25.634 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:25.634 "strip_size_kb": 0, 00:17:25.634 "state": "online", 00:17:25.634 "raid_level": "raid1", 00:17:25.634 "superblock": true, 00:17:25.634 "num_base_bdevs": 2, 00:17:25.634 "num_base_bdevs_discovered": 1, 00:17:25.634 "num_base_bdevs_operational": 1, 00:17:25.634 "base_bdevs_list": [ 00:17:25.634 { 00:17:25.634 "name": null, 00:17:25.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.634 "is_configured": false, 00:17:25.634 "data_offset": 0, 00:17:25.634 "data_size": 7936 00:17:25.634 }, 00:17:25.634 { 00:17:25.634 "name": "BaseBdev2", 00:17:25.634 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:25.634 "is_configured": 
true, 00:17:25.634 "data_offset": 256, 00:17:25.634 "data_size": 7936 00:17:25.634 } 00:17:25.634 ] 00:17:25.634 }' 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.634 [2024-11-21 03:26:13.114914] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.634 [2024-11-21 03:26:13.115060] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.634 [2024-11-21 03:26:13.115074] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.634 request: 00:17:25.634 { 00:17:25.634 "base_bdev": "BaseBdev1", 00:17:25.634 "raid_bdev": "raid_bdev1", 00:17:25.634 "method": "bdev_raid_add_base_bdev", 00:17:25.634 "req_id": 1 00:17:25.634 } 00:17:25.634 Got JSON-RPC error response 00:17:25.634 response: 00:17:25.634 { 00:17:25.634 "code": -22, 00:17:25.634 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:25.634 } 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.634 03:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.575 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.835 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.835 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.835 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.835 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.835 "name": "raid_bdev1", 00:17:26.835 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:26.835 "strip_size_kb": 0, 00:17:26.835 "state": "online", 00:17:26.835 "raid_level": "raid1", 00:17:26.835 "superblock": true, 00:17:26.835 "num_base_bdevs": 2, 00:17:26.835 "num_base_bdevs_discovered": 1, 00:17:26.835 "num_base_bdevs_operational": 1, 00:17:26.835 "base_bdevs_list": [ 00:17:26.835 { 00:17:26.835 "name": null, 00:17:26.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.835 "is_configured": false, 00:17:26.835 
"data_offset": 0, 00:17:26.835 "data_size": 7936 00:17:26.835 }, 00:17:26.835 { 00:17:26.835 "name": "BaseBdev2", 00:17:26.835 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:26.835 "is_configured": true, 00:17:26.835 "data_offset": 256, 00:17:26.835 "data_size": 7936 00:17:26.835 } 00:17:26.835 ] 00:17:26.835 }' 00:17:26.835 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.835 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.095 "name": "raid_bdev1", 00:17:27.095 "uuid": "76c5e9d5-197c-427c-ae1e-3ea4af2166a5", 00:17:27.095 
"strip_size_kb": 0, 00:17:27.095 "state": "online", 00:17:27.095 "raid_level": "raid1", 00:17:27.095 "superblock": true, 00:17:27.095 "num_base_bdevs": 2, 00:17:27.095 "num_base_bdevs_discovered": 1, 00:17:27.095 "num_base_bdevs_operational": 1, 00:17:27.095 "base_bdevs_list": [ 00:17:27.095 { 00:17:27.095 "name": null, 00:17:27.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.095 "is_configured": false, 00:17:27.095 "data_offset": 0, 00:17:27.095 "data_size": 7936 00:17:27.095 }, 00:17:27.095 { 00:17:27.095 "name": "BaseBdev2", 00:17:27.095 "uuid": "eefb1d1c-476f-5651-b743-74d0d5b2448c", 00:17:27.095 "is_configured": true, 00:17:27.095 "data_offset": 256, 00:17:27.095 "data_size": 7936 00:17:27.095 } 00:17:27.095 ] 00:17:27.095 }' 00:17:27.095 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 100147 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 100147 ']' 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 100147 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.355 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100147 00:17:27.355 03:26:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.355 killing process with pid 100147 00:17:27.355 Received shutdown signal, test time was about 60.000000 seconds 00:17:27.355 00:17:27.355 Latency(us) 00:17:27.355 [2024-11-21T03:26:14.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.355 [2024-11-21T03:26:14.921Z] =================================================================================================================== 00:17:27.355 [2024-11-21T03:26:14.921Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:27.356 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.356 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100147' 00:17:27.356 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 100147 00:17:27.356 [2024-11-21 03:26:14.741270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.356 [2024-11-21 03:26:14.741375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.356 [2024-11-21 03:26:14.741422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.356 [2024-11-21 03:26:14.741433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:27.356 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 100147 00:17:27.356 [2024-11-21 03:26:14.774312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.616 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:27.616 00:17:27.616 real 0m18.249s 00:17:27.616 user 0m24.261s 00:17:27.616 sys 0m2.667s 00:17:27.616 
************************************ 00:17:27.616 END TEST raid_rebuild_test_sb_md_separate 00:17:27.616 ************************************ 00:17:27.616 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.616 03:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.616 03:26:15 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:27.616 03:26:15 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:27.616 03:26:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:27.616 03:26:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.616 03:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.616 ************************************ 00:17:27.616 START TEST raid_state_function_test_sb_md_interleaved 00:17:27.616 ************************************ 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.616 03:26:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100821 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100821' 00:17:27.616 Process raid pid: 100821 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100821 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100821 ']' 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.616 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.616 [2024-11-21 03:26:15.149489] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:17:27.616 [2024-11-21 03:26:15.149648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.877 [2024-11-21 03:26:15.286465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:27.877 [2024-11-21 03:26:15.323973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.877 [2024-11-21 03:26:15.350594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.877 [2024-11-21 03:26:15.393972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.877 [2024-11-21 03:26:15.394113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.447 [2024-11-21 03:26:15.989046] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.447 [2024-11-21 03:26:15.989098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.447 [2024-11-21 03:26:15.989109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:17:28.447 [2024-11-21 03:26:15.989117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.447 03:26:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.447 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:28.447 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.709 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.709 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.710 "name": "Existed_Raid", 00:17:28.710 "uuid": "9eb387e3-3157-457b-8238-df8a4eaf1627", 00:17:28.710 "strip_size_kb": 0, 00:17:28.710 "state": "configuring", 00:17:28.710 "raid_level": "raid1", 00:17:28.710 "superblock": true, 00:17:28.710 "num_base_bdevs": 2, 00:17:28.710 "num_base_bdevs_discovered": 0, 00:17:28.710 "num_base_bdevs_operational": 2, 00:17:28.710 "base_bdevs_list": [ 00:17:28.710 { 00:17:28.710 "name": "BaseBdev1", 00:17:28.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.710 "is_configured": false, 00:17:28.710 "data_offset": 0, 00:17:28.710 "data_size": 0 00:17:28.710 }, 00:17:28.710 { 00:17:28.710 "name": "BaseBdev2", 00:17:28.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.710 "is_configured": false, 00:17:28.710 "data_offset": 0, 00:17:28.710 "data_size": 0 00:17:28.710 } 00:17:28.710 ] 00:17:28.710 }' 00:17:28.710 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.710 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 [2024-11-21 03:26:16.461065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:17:28.970 [2024-11-21 03:26:16.461152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 [2024-11-21 03:26:16.473094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.970 [2024-11-21 03:26:16.473171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.970 [2024-11-21 03:26:16.473198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.970 [2024-11-21 03:26:16.473218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 [2024-11-21 03:26:16.494077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.970 BaseBdev1 00:17:28.970 03:26:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 [ 00:17:28.970 { 00:17:28.970 "name": "BaseBdev1", 00:17:28.970 "aliases": [ 00:17:28.970 "eabada60-7928-4c4d-b3b1-61d4c1e96c05" 00:17:28.970 ], 00:17:28.970 "product_name": "Malloc 
disk", 00:17:28.970 "block_size": 4128, 00:17:28.970 "num_blocks": 8192, 00:17:28.970 "uuid": "eabada60-7928-4c4d-b3b1-61d4c1e96c05", 00:17:28.970 "md_size": 32, 00:17:28.970 "md_interleave": true, 00:17:28.970 "dif_type": 0, 00:17:28.970 "assigned_rate_limits": { 00:17:28.970 "rw_ios_per_sec": 0, 00:17:28.970 "rw_mbytes_per_sec": 0, 00:17:28.970 "r_mbytes_per_sec": 0, 00:17:28.970 "w_mbytes_per_sec": 0 00:17:28.970 }, 00:17:28.970 "claimed": true, 00:17:28.970 "claim_type": "exclusive_write", 00:17:28.970 "zoned": false, 00:17:28.970 "supported_io_types": { 00:17:28.970 "read": true, 00:17:28.970 "write": true, 00:17:28.970 "unmap": true, 00:17:28.970 "flush": true, 00:17:28.970 "reset": true, 00:17:28.970 "nvme_admin": false, 00:17:28.970 "nvme_io": false, 00:17:28.970 "nvme_io_md": false, 00:17:28.970 "write_zeroes": true, 00:17:28.970 "zcopy": true, 00:17:28.970 "get_zone_info": false, 00:17:28.970 "zone_management": false, 00:17:28.970 "zone_append": false, 00:17:28.970 "compare": false, 00:17:28.970 "compare_and_write": false, 00:17:28.970 "abort": true, 00:17:28.970 "seek_hole": false, 00:17:28.970 "seek_data": false, 00:17:28.970 "copy": true, 00:17:28.970 "nvme_iov_md": false 00:17:28.970 }, 00:17:28.970 "memory_domains": [ 00:17:28.970 { 00:17:28.970 "dma_device_id": "system", 00:17:28.970 "dma_device_type": 1 00:17:28.970 }, 00:17:28.970 { 00:17:28.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.970 "dma_device_type": 2 00:17:28.970 } 00:17:28.970 ], 00:17:28.970 "driver_specific": {} 00:17:28.970 } 00:17:28.970 ] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.970 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.230 03:26:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.230 "name": "Existed_Raid", 00:17:29.230 "uuid": 
"3c641d76-bea3-4b19-a07d-da2af89ee3c8", 00:17:29.230 "strip_size_kb": 0, 00:17:29.230 "state": "configuring", 00:17:29.230 "raid_level": "raid1", 00:17:29.230 "superblock": true, 00:17:29.230 "num_base_bdevs": 2, 00:17:29.230 "num_base_bdevs_discovered": 1, 00:17:29.230 "num_base_bdevs_operational": 2, 00:17:29.230 "base_bdevs_list": [ 00:17:29.230 { 00:17:29.230 "name": "BaseBdev1", 00:17:29.230 "uuid": "eabada60-7928-4c4d-b3b1-61d4c1e96c05", 00:17:29.230 "is_configured": true, 00:17:29.230 "data_offset": 256, 00:17:29.230 "data_size": 7936 00:17:29.230 }, 00:17:29.230 { 00:17:29.230 "name": "BaseBdev2", 00:17:29.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.230 "is_configured": false, 00:17:29.230 "data_offset": 0, 00:17:29.230 "data_size": 0 00:17:29.230 } 00:17:29.230 ] 00:17:29.230 }' 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.230 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.490 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.490 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.490 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.490 [2024-11-21 03:26:16.950227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.490 [2024-11-21 03:26:16.950326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:29.490 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.490 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.491 [2024-11-21 03:26:16.962294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.491 [2024-11-21 03:26:16.964073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.491 [2024-11-21 03:26:16.964156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.491 03:26:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.491 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.491 "name": "Existed_Raid", 00:17:29.491 "uuid": "3f9a3a63-92c5-4d34-9589-bbf69c1e5d4e", 00:17:29.491 "strip_size_kb": 0, 00:17:29.491 "state": "configuring", 00:17:29.491 "raid_level": "raid1", 00:17:29.491 "superblock": true, 00:17:29.491 "num_base_bdevs": 2, 00:17:29.491 "num_base_bdevs_discovered": 1, 00:17:29.491 "num_base_bdevs_operational": 2, 00:17:29.491 "base_bdevs_list": [ 00:17:29.491 { 00:17:29.491 "name": "BaseBdev1", 00:17:29.491 "uuid": "eabada60-7928-4c4d-b3b1-61d4c1e96c05", 00:17:29.491 "is_configured": true, 00:17:29.491 "data_offset": 256, 00:17:29.491 "data_size": 7936 00:17:29.491 }, 00:17:29.491 { 00:17:29.491 "name": "BaseBdev2", 00:17:29.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.491 "is_configured": false, 00:17:29.491 "data_offset": 0, 00:17:29.491 
"data_size": 0 00:17:29.491 } 00:17:29.491 ] 00:17:29.491 }' 00:17:29.491 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.491 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.061 [2024-11-21 03:26:17.453535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.061 [2024-11-21 03:26:17.453788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:30.061 [2024-11-21 03:26:17.453835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:30.061 [2024-11-21 03:26:17.453965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:30.061 [2024-11-21 03:26:17.454096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:30.061 [2024-11-21 03:26:17.454136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:17:30.061 [2024-11-21 03:26:17.454238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.061 BaseBdev2 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev2 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.061 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.061 [ 00:17:30.061 { 00:17:30.061 "name": "BaseBdev2", 00:17:30.061 "aliases": [ 00:17:30.062 "074f5d78-1538-4b93-8813-9e9296a6f3c5" 00:17:30.062 ], 00:17:30.062 "product_name": "Malloc disk", 00:17:30.062 "block_size": 4128, 00:17:30.062 "num_blocks": 8192, 00:17:30.062 "uuid": "074f5d78-1538-4b93-8813-9e9296a6f3c5", 00:17:30.062 "md_size": 32, 00:17:30.062 "md_interleave": true, 00:17:30.062 "dif_type": 0, 00:17:30.062 "assigned_rate_limits": { 00:17:30.062 "rw_ios_per_sec": 0, 00:17:30.062 "rw_mbytes_per_sec": 0, 
00:17:30.062 "r_mbytes_per_sec": 0, 00:17:30.062 "w_mbytes_per_sec": 0 00:17:30.062 }, 00:17:30.062 "claimed": true, 00:17:30.062 "claim_type": "exclusive_write", 00:17:30.062 "zoned": false, 00:17:30.062 "supported_io_types": { 00:17:30.062 "read": true, 00:17:30.062 "write": true, 00:17:30.062 "unmap": true, 00:17:30.062 "flush": true, 00:17:30.062 "reset": true, 00:17:30.062 "nvme_admin": false, 00:17:30.062 "nvme_io": false, 00:17:30.062 "nvme_io_md": false, 00:17:30.062 "write_zeroes": true, 00:17:30.062 "zcopy": true, 00:17:30.062 "get_zone_info": false, 00:17:30.062 "zone_management": false, 00:17:30.062 "zone_append": false, 00:17:30.062 "compare": false, 00:17:30.062 "compare_and_write": false, 00:17:30.062 "abort": true, 00:17:30.062 "seek_hole": false, 00:17:30.062 "seek_data": false, 00:17:30.062 "copy": true, 00:17:30.062 "nvme_iov_md": false 00:17:30.062 }, 00:17:30.062 "memory_domains": [ 00:17:30.062 { 00:17:30.062 "dma_device_id": "system", 00:17:30.062 "dma_device_type": 1 00:17:30.062 }, 00:17:30.062 { 00:17:30.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.062 "dma_device_type": 2 00:17:30.062 } 00:17:30.062 ], 00:17:30.062 "driver_specific": {} 00:17:30.062 } 00:17:30.062 ] 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.062 "name": "Existed_Raid", 00:17:30.062 "uuid": "3f9a3a63-92c5-4d34-9589-bbf69c1e5d4e", 00:17:30.062 "strip_size_kb": 0, 00:17:30.062 "state": 
"online", 00:17:30.062 "raid_level": "raid1", 00:17:30.062 "superblock": true, 00:17:30.062 "num_base_bdevs": 2, 00:17:30.062 "num_base_bdevs_discovered": 2, 00:17:30.062 "num_base_bdevs_operational": 2, 00:17:30.062 "base_bdevs_list": [ 00:17:30.062 { 00:17:30.062 "name": "BaseBdev1", 00:17:30.062 "uuid": "eabada60-7928-4c4d-b3b1-61d4c1e96c05", 00:17:30.062 "is_configured": true, 00:17:30.062 "data_offset": 256, 00:17:30.062 "data_size": 7936 00:17:30.062 }, 00:17:30.062 { 00:17:30.062 "name": "BaseBdev2", 00:17:30.062 "uuid": "074f5d78-1538-4b93-8813-9e9296a6f3c5", 00:17:30.062 "is_configured": true, 00:17:30.062 "data_offset": 256, 00:17:30.062 "data_size": 7936 00:17:30.062 } 00:17:30.062 ] 00:17:30.062 }' 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.062 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# jq '.[]' 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.666 [2024-11-21 03:26:17.953988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.666 "name": "Existed_Raid", 00:17:30.666 "aliases": [ 00:17:30.666 "3f9a3a63-92c5-4d34-9589-bbf69c1e5d4e" 00:17:30.666 ], 00:17:30.666 "product_name": "Raid Volume", 00:17:30.666 "block_size": 4128, 00:17:30.666 "num_blocks": 7936, 00:17:30.666 "uuid": "3f9a3a63-92c5-4d34-9589-bbf69c1e5d4e", 00:17:30.666 "md_size": 32, 00:17:30.666 "md_interleave": true, 00:17:30.666 "dif_type": 0, 00:17:30.666 "assigned_rate_limits": { 00:17:30.666 "rw_ios_per_sec": 0, 00:17:30.666 "rw_mbytes_per_sec": 0, 00:17:30.666 "r_mbytes_per_sec": 0, 00:17:30.666 "w_mbytes_per_sec": 0 00:17:30.666 }, 00:17:30.666 "claimed": false, 00:17:30.666 "zoned": false, 00:17:30.666 "supported_io_types": { 00:17:30.666 "read": true, 00:17:30.666 "write": true, 00:17:30.666 "unmap": false, 00:17:30.666 "flush": false, 00:17:30.666 "reset": true, 00:17:30.666 "nvme_admin": false, 00:17:30.666 "nvme_io": false, 00:17:30.666 "nvme_io_md": false, 00:17:30.666 "write_zeroes": true, 00:17:30.666 "zcopy": false, 00:17:30.666 "get_zone_info": false, 00:17:30.666 "zone_management": false, 00:17:30.666 "zone_append": false, 00:17:30.666 "compare": false, 00:17:30.666 "compare_and_write": false, 00:17:30.666 "abort": false, 00:17:30.666 "seek_hole": false, 00:17:30.666 "seek_data": false, 00:17:30.666 "copy": false, 00:17:30.666 "nvme_iov_md": false 00:17:30.666 }, 00:17:30.666 
"memory_domains": [ 00:17:30.666 { 00:17:30.666 "dma_device_id": "system", 00:17:30.666 "dma_device_type": 1 00:17:30.666 }, 00:17:30.666 { 00:17:30.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.666 "dma_device_type": 2 00:17:30.666 }, 00:17:30.666 { 00:17:30.666 "dma_device_id": "system", 00:17:30.666 "dma_device_type": 1 00:17:30.666 }, 00:17:30.666 { 00:17:30.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.666 "dma_device_type": 2 00:17:30.666 } 00:17:30.666 ], 00:17:30.666 "driver_specific": { 00:17:30.666 "raid": { 00:17:30.666 "uuid": "3f9a3a63-92c5-4d34-9589-bbf69c1e5d4e", 00:17:30.666 "strip_size_kb": 0, 00:17:30.666 "state": "online", 00:17:30.666 "raid_level": "raid1", 00:17:30.666 "superblock": true, 00:17:30.666 "num_base_bdevs": 2, 00:17:30.666 "num_base_bdevs_discovered": 2, 00:17:30.666 "num_base_bdevs_operational": 2, 00:17:30.666 "base_bdevs_list": [ 00:17:30.666 { 00:17:30.666 "name": "BaseBdev1", 00:17:30.666 "uuid": "eabada60-7928-4c4d-b3b1-61d4c1e96c05", 00:17:30.666 "is_configured": true, 00:17:30.666 "data_offset": 256, 00:17:30.666 "data_size": 7936 00:17:30.666 }, 00:17:30.666 { 00:17:30.666 "name": "BaseBdev2", 00:17:30.666 "uuid": "074f5d78-1538-4b93-8813-9e9296a6f3c5", 00:17:30.666 "is_configured": true, 00:17:30.666 "data_offset": 256, 00:17:30.666 "data_size": 7936 00:17:30.666 } 00:17:30.666 ] 00:17:30.666 } 00:17:30.666 } 00:17:30.666 }' 00:17:30.666 03:26:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:30.666 BaseBdev2' 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.666 03:26:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.666 03:26:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.666 [2024-11-21 03:26:18.169821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.666 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.667 03:26:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.667 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.926 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.926 "name": "Existed_Raid", 00:17:30.926 "uuid": "3f9a3a63-92c5-4d34-9589-bbf69c1e5d4e", 00:17:30.926 "strip_size_kb": 0, 00:17:30.926 "state": "online", 00:17:30.926 "raid_level": "raid1", 
00:17:30.926 "superblock": true, 00:17:30.926 "num_base_bdevs": 2, 00:17:30.926 "num_base_bdevs_discovered": 1, 00:17:30.926 "num_base_bdevs_operational": 1, 00:17:30.926 "base_bdevs_list": [ 00:17:30.926 { 00:17:30.926 "name": null, 00:17:30.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.926 "is_configured": false, 00:17:30.926 "data_offset": 0, 00:17:30.926 "data_size": 7936 00:17:30.926 }, 00:17:30.926 { 00:17:30.926 "name": "BaseBdev2", 00:17:30.926 "uuid": "074f5d78-1538-4b93-8813-9e9296a6f3c5", 00:17:30.926 "is_configured": true, 00:17:30.926 "data_offset": 256, 00:17:30.926 "data_size": 7936 00:17:30.926 } 00:17:30.926 ] 00:17:30.926 }' 00:17:30.926 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.926 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.186 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.187 [2024-11-21 03:26:18.701783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.187 [2024-11-21 03:26:18.701933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.187 [2024-11-21 03:26:18.714025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.187 [2024-11-21 03:26:18.714141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.187 [2024-11-21 03:26:18.714183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.187 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100821 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100821 ']' 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100821 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100821 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100821' 00:17:31.447 killing process with pid 100821 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100821 00:17:31.447 [2024-11-21 03:26:18.811860] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:17:31.447 03:26:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100821 00:17:31.447 [2024-11-21 03:26:18.812819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.707 03:26:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:31.707 00:17:31.707 real 0m3.968s 00:17:31.707 user 0m6.252s 00:17:31.707 sys 0m0.876s 00:17:31.707 03:26:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.707 ************************************ 00:17:31.707 END TEST raid_state_function_test_sb_md_interleaved 00:17:31.707 ************************************ 00:17:31.707 03:26:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.707 03:26:19 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:31.708 03:26:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:31.708 03:26:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.708 03:26:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.708 ************************************ 00:17:31.708 START TEST raid_superblock_test_md_interleaved 00:17:31.708 ************************************ 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=101056 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 101056 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 101056 ']' 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.708 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.708 [2024-11-21 03:26:19.191544] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:17:31.708 [2024-11-21 03:26:19.191759] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101056 ] 00:17:31.968 [2024-11-21 03:26:19.325342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:31.968 [2024-11-21 03:26:19.363491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.968 [2024-11-21 03:26:19.389933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.968 [2024-11-21 03:26:19.433196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.968 [2024-11-21 03:26:19.433317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.538 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.538 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:32.538 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:32.538 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.538 03:26:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.538 03:26:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.538 malloc1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.538 [2024-11-21 03:26:20.025103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:32.538 [2024-11-21 03:26:20.025245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.538 [2024-11-21 03:26:20.025283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:32.538 [2024-11-21 03:26:20.025310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.538 [2024-11-21 03:26:20.027203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.538 [2024-11-21 03:26:20.027274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:32.538 pt1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:32.538 
03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.538 malloc2 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.538 [2024-11-21 03:26:20.057839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.538 [2024-11-21 03:26:20.057945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.538 [2024-11-21 03:26:20.057978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:32.538 [2024-11-21 03:26:20.058004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.538 [2024-11-21 03:26:20.059821] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.538 [2024-11-21 03:26:20.059894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.538 pt2 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.538 [2024-11-21 03:26:20.069868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.538 [2024-11-21 03:26:20.071590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.538 [2024-11-21 03:26:20.071733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:32.538 [2024-11-21 03:26:20.071746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:32.538 [2024-11-21 03:26:20.071813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:32.538 [2024-11-21 03:26:20.071877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:32.538 [2024-11-21 03:26:20.071889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:32.538 [2024-11-21 03:26:20.071951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.538 
03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.538 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.798 
03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.798 "name": "raid_bdev1", 00:17:32.798 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:32.798 "strip_size_kb": 0, 00:17:32.798 "state": "online", 00:17:32.798 "raid_level": "raid1", 00:17:32.798 "superblock": true, 00:17:32.798 "num_base_bdevs": 2, 00:17:32.798 "num_base_bdevs_discovered": 2, 00:17:32.798 "num_base_bdevs_operational": 2, 00:17:32.798 "base_bdevs_list": [ 00:17:32.798 { 00:17:32.798 "name": "pt1", 00:17:32.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.798 "is_configured": true, 00:17:32.798 "data_offset": 256, 00:17:32.798 "data_size": 7936 00:17:32.798 }, 00:17:32.798 { 00:17:32.798 "name": "pt2", 00:17:32.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.798 "is_configured": true, 00:17:32.798 "data_offset": 256, 00:17:32.798 "data_size": 7936 00:17:32.798 } 00:17:32.798 ] 00:17:32.798 }' 00:17:32.798 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.798 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.058 03:26:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.058 [2024-11-21 03:26:20.562308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.058 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.058 "name": "raid_bdev1", 00:17:33.058 "aliases": [ 00:17:33.058 "b14b7f32-5349-4fb4-96bc-5504344d6f09" 00:17:33.059 ], 00:17:33.059 "product_name": "Raid Volume", 00:17:33.059 "block_size": 4128, 00:17:33.059 "num_blocks": 7936, 00:17:33.059 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:33.059 "md_size": 32, 00:17:33.059 "md_interleave": true, 00:17:33.059 "dif_type": 0, 00:17:33.059 "assigned_rate_limits": { 00:17:33.059 "rw_ios_per_sec": 0, 00:17:33.059 "rw_mbytes_per_sec": 0, 00:17:33.059 "r_mbytes_per_sec": 0, 00:17:33.059 "w_mbytes_per_sec": 0 00:17:33.059 }, 00:17:33.059 "claimed": false, 00:17:33.059 "zoned": false, 00:17:33.059 "supported_io_types": { 00:17:33.059 "read": true, 00:17:33.059 "write": true, 00:17:33.059 "unmap": false, 00:17:33.059 "flush": false, 00:17:33.059 "reset": true, 00:17:33.059 "nvme_admin": false, 00:17:33.059 "nvme_io": false, 00:17:33.059 "nvme_io_md": false, 00:17:33.059 "write_zeroes": true, 00:17:33.059 "zcopy": false, 00:17:33.059 "get_zone_info": false, 00:17:33.059 "zone_management": false, 00:17:33.059 "zone_append": false, 00:17:33.059 "compare": false, 00:17:33.059 "compare_and_write": false, 00:17:33.059 
"abort": false, 00:17:33.059 "seek_hole": false, 00:17:33.059 "seek_data": false, 00:17:33.059 "copy": false, 00:17:33.059 "nvme_iov_md": false 00:17:33.059 }, 00:17:33.059 "memory_domains": [ 00:17:33.059 { 00:17:33.059 "dma_device_id": "system", 00:17:33.059 "dma_device_type": 1 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.059 "dma_device_type": 2 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "dma_device_id": "system", 00:17:33.059 "dma_device_type": 1 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.059 "dma_device_type": 2 00:17:33.059 } 00:17:33.059 ], 00:17:33.059 "driver_specific": { 00:17:33.059 "raid": { 00:17:33.059 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:33.059 "strip_size_kb": 0, 00:17:33.059 "state": "online", 00:17:33.059 "raid_level": "raid1", 00:17:33.059 "superblock": true, 00:17:33.059 "num_base_bdevs": 2, 00:17:33.059 "num_base_bdevs_discovered": 2, 00:17:33.059 "num_base_bdevs_operational": 2, 00:17:33.059 "base_bdevs_list": [ 00:17:33.059 { 00:17:33.059 "name": "pt1", 00:17:33.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.059 "is_configured": true, 00:17:33.059 "data_offset": 256, 00:17:33.059 "data_size": 7936 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "name": "pt2", 00:17:33.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.059 "is_configured": true, 00:17:33.059 "data_offset": 256, 00:17:33.059 "data_size": 7936 00:17:33.059 } 00:17:33.059 ] 00:17:33.059 } 00:17:33.059 } 00:17:33.059 }' 00:17:33.059 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:33.319 pt2' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 [2024-11-21 03:26:20.762255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b14b7f32-5349-4fb4-96bc-5504344d6f09 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b14b7f32-5349-4fb4-96bc-5504344d6f09 ']' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 [2024-11-21 03:26:20.806051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.319 [2024-11-21 03:26:20.806073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.319 [2024-11-21 
03:26:20.806144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.319 [2024-11-21 03:26:20.806207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.319 [2024-11-21 03:26:20.806226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.319 03:26:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.319 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.579 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.579 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:33.579 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.579 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:33.579 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.579 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.580 03:26:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.580 [2024-11-21 03:26:20.946098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:33.580 [2024-11-21 03:26:20.947961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:33.580 [2024-11-21 03:26:20.948076] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:33.580 [2024-11-21 03:26:20.948168] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:33.580 [2024-11-21 03:26:20.948219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.580 [2024-11-21 03:26:20.948248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:17:33.580 request: 00:17:33.580 { 00:17:33.580 "name": "raid_bdev1", 00:17:33.580 "raid_level": "raid1", 00:17:33.580 "base_bdevs": [ 00:17:33.580 "malloc1", 00:17:33.580 "malloc2" 00:17:33.580 ], 00:17:33.580 "superblock": false, 00:17:33.580 "method": "bdev_raid_create", 00:17:33.580 "req_id": 1 00:17:33.580 } 00:17:33.580 Got JSON-RPC error response 
00:17:33.580 response: 00:17:33.580 { 00:17:33.580 "code": -17, 00:17:33.580 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:33.580 } 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.580 03:26:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.580 
[2024-11-21 03:26:21.014089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.580 [2024-11-21 03:26:21.014180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.580 [2024-11-21 03:26:21.014219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:33.580 [2024-11-21 03:26:21.014252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.580 [2024-11-21 03:26:21.016128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.580 [2024-11-21 03:26:21.016210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.580 [2024-11-21 03:26:21.016272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.580 [2024-11-21 03:26:21.016340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.580 pt1 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.580 "name": "raid_bdev1", 00:17:33.580 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:33.580 "strip_size_kb": 0, 00:17:33.580 "state": "configuring", 00:17:33.580 "raid_level": "raid1", 00:17:33.580 "superblock": true, 00:17:33.580 "num_base_bdevs": 2, 00:17:33.580 "num_base_bdevs_discovered": 1, 00:17:33.580 "num_base_bdevs_operational": 2, 00:17:33.580 "base_bdevs_list": [ 00:17:33.580 { 00:17:33.580 "name": "pt1", 00:17:33.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.580 "is_configured": true, 00:17:33.580 "data_offset": 256, 00:17:33.580 "data_size": 7936 00:17:33.580 }, 00:17:33.580 { 00:17:33.580 "name": null, 00:17:33.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.580 "is_configured": false, 00:17:33.580 "data_offset": 256, 00:17:33.580 "data_size": 7936 00:17:33.580 } 00:17:33.580 ] 00:17:33.580 }' 00:17:33.580 03:26:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.580 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.150 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.151 [2024-11-21 03:26:21.450212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.151 [2024-11-21 03:26:21.450267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.151 [2024-11-21 03:26:21.450285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:34.151 [2024-11-21 03:26:21.450295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.151 [2024-11-21 03:26:21.450406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.151 [2024-11-21 03:26:21.450419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.151 [2024-11-21 03:26:21.450456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.151 [2024-11-21 03:26:21.450474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.151 [2024-11-21 03:26:21.450533] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:34.151 [2024-11-21 03:26:21.450543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:34.151 [2024-11-21 03:26:21.450605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:34.151 [2024-11-21 03:26:21.450668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:34.151 [2024-11-21 03:26:21.450675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:34.151 [2024-11-21 03:26:21.450726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.151 pt2 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.151 "name": "raid_bdev1", 00:17:34.151 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:34.151 "strip_size_kb": 0, 00:17:34.151 "state": "online", 00:17:34.151 "raid_level": "raid1", 00:17:34.151 "superblock": true, 00:17:34.151 "num_base_bdevs": 2, 00:17:34.151 "num_base_bdevs_discovered": 2, 00:17:34.151 "num_base_bdevs_operational": 2, 00:17:34.151 "base_bdevs_list": [ 00:17:34.151 { 00:17:34.151 "name": "pt1", 00:17:34.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.151 "is_configured": true, 00:17:34.151 "data_offset": 256, 00:17:34.151 "data_size": 7936 00:17:34.151 }, 00:17:34.151 { 00:17:34.151 "name": "pt2", 00:17:34.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.151 "is_configured": true, 00:17:34.151 "data_offset": 256, 00:17:34.151 "data_size": 7936 00:17:34.151 } 00:17:34.151 ] 00:17:34.151 }' 00:17:34.151 03:26:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.151 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.410 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.410 [2024-11-21 03:26:21.902557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.411 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.411 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.411 "name": "raid_bdev1", 00:17:34.411 "aliases": [ 00:17:34.411 "b14b7f32-5349-4fb4-96bc-5504344d6f09" 00:17:34.411 ], 00:17:34.411 "product_name": "Raid Volume", 00:17:34.411 "block_size": 4128, 00:17:34.411 
"num_blocks": 7936, 00:17:34.411 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:34.411 "md_size": 32, 00:17:34.411 "md_interleave": true, 00:17:34.411 "dif_type": 0, 00:17:34.411 "assigned_rate_limits": { 00:17:34.411 "rw_ios_per_sec": 0, 00:17:34.411 "rw_mbytes_per_sec": 0, 00:17:34.411 "r_mbytes_per_sec": 0, 00:17:34.411 "w_mbytes_per_sec": 0 00:17:34.411 }, 00:17:34.411 "claimed": false, 00:17:34.411 "zoned": false, 00:17:34.411 "supported_io_types": { 00:17:34.411 "read": true, 00:17:34.411 "write": true, 00:17:34.411 "unmap": false, 00:17:34.411 "flush": false, 00:17:34.411 "reset": true, 00:17:34.411 "nvme_admin": false, 00:17:34.411 "nvme_io": false, 00:17:34.411 "nvme_io_md": false, 00:17:34.411 "write_zeroes": true, 00:17:34.411 "zcopy": false, 00:17:34.411 "get_zone_info": false, 00:17:34.411 "zone_management": false, 00:17:34.411 "zone_append": false, 00:17:34.411 "compare": false, 00:17:34.411 "compare_and_write": false, 00:17:34.411 "abort": false, 00:17:34.411 "seek_hole": false, 00:17:34.411 "seek_data": false, 00:17:34.411 "copy": false, 00:17:34.411 "nvme_iov_md": false 00:17:34.411 }, 00:17:34.411 "memory_domains": [ 00:17:34.411 { 00:17:34.411 "dma_device_id": "system", 00:17:34.411 "dma_device_type": 1 00:17:34.411 }, 00:17:34.411 { 00:17:34.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.411 "dma_device_type": 2 00:17:34.411 }, 00:17:34.411 { 00:17:34.411 "dma_device_id": "system", 00:17:34.411 "dma_device_type": 1 00:17:34.411 }, 00:17:34.411 { 00:17:34.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.411 "dma_device_type": 2 00:17:34.411 } 00:17:34.411 ], 00:17:34.411 "driver_specific": { 00:17:34.411 "raid": { 00:17:34.411 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:34.411 "strip_size_kb": 0, 00:17:34.411 "state": "online", 00:17:34.411 "raid_level": "raid1", 00:17:34.411 "superblock": true, 00:17:34.411 "num_base_bdevs": 2, 00:17:34.411 "num_base_bdevs_discovered": 2, 00:17:34.411 "num_base_bdevs_operational": 
2, 00:17:34.411 "base_bdevs_list": [ 00:17:34.411 { 00:17:34.411 "name": "pt1", 00:17:34.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.411 "is_configured": true, 00:17:34.411 "data_offset": 256, 00:17:34.411 "data_size": 7936 00:17:34.411 }, 00:17:34.411 { 00:17:34.411 "name": "pt2", 00:17:34.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.411 "is_configured": true, 00:17:34.411 "data_offset": 256, 00:17:34.411 "data_size": 7936 00:17:34.411 } 00:17:34.411 ] 00:17:34.411 } 00:17:34.411 } 00:17:34.411 }' 00:17:34.411 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.671 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.671 pt2' 00:17:34.671 03:26:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.671 03:26:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.671 [2024-11-21 03:26:22.114585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b14b7f32-5349-4fb4-96bc-5504344d6f09 '!=' b14b7f32-5349-4fb4-96bc-5504344d6f09 ']' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.671 [2024-11-21 03:26:22.162402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.671 "name": "raid_bdev1", 00:17:34.671 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:34.671 "strip_size_kb": 0, 00:17:34.671 "state": "online", 00:17:34.671 "raid_level": "raid1", 00:17:34.671 "superblock": true, 00:17:34.671 "num_base_bdevs": 2, 00:17:34.671 "num_base_bdevs_discovered": 1, 00:17:34.671 "num_base_bdevs_operational": 1, 00:17:34.671 "base_bdevs_list": [ 00:17:34.671 { 00:17:34.671 "name": null, 00:17:34.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.671 "is_configured": false, 00:17:34.671 "data_offset": 0, 00:17:34.671 "data_size": 7936 00:17:34.671 }, 00:17:34.671 { 00:17:34.671 "name": "pt2", 00:17:34.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.671 "is_configured": true, 00:17:34.671 "data_offset": 256, 00:17:34.671 "data_size": 7936 00:17:34.671 } 00:17:34.671 ] 00:17:34.671 
}' 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.671 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.241 [2024-11-21 03:26:22.594493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.241 [2024-11-21 03:26:22.594574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.241 [2024-11-21 03:26:22.594642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.241 [2024-11-21 03:26:22.594685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.241 [2024-11-21 03:26:22.594696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.241 
03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.241 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.241 [2024-11-21 03:26:22.670512] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.241 [2024-11-21 03:26:22.670615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.241 [2024-11-21 03:26:22.670647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:35.241 [2024-11-21 03:26:22.670675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.241 [2024-11-21 03:26:22.672557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.241 [2024-11-21 03:26:22.672632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.241 [2024-11-21 03:26:22.672697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.241 [2024-11-21 03:26:22.672746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.241 [2024-11-21 03:26:22.672813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:35.241 [2024-11-21 03:26:22.672838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:35.241 [2024-11-21 03:26:22.672940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:35.242 [2024-11-21 03:26:22.673051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:35.242 [2024-11-21 03:26:22.673088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:35.242 [2024-11-21 03:26:22.673177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.242 pt2 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.242 03:26:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.242 "name": "raid_bdev1", 00:17:35.242 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:35.242 "strip_size_kb": 0, 00:17:35.242 "state": "online", 00:17:35.242 
"raid_level": "raid1", 00:17:35.242 "superblock": true, 00:17:35.242 "num_base_bdevs": 2, 00:17:35.242 "num_base_bdevs_discovered": 1, 00:17:35.242 "num_base_bdevs_operational": 1, 00:17:35.242 "base_bdevs_list": [ 00:17:35.242 { 00:17:35.242 "name": null, 00:17:35.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.242 "is_configured": false, 00:17:35.242 "data_offset": 256, 00:17:35.242 "data_size": 7936 00:17:35.242 }, 00:17:35.242 { 00:17:35.242 "name": "pt2", 00:17:35.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.242 "is_configured": true, 00:17:35.242 "data_offset": 256, 00:17:35.242 "data_size": 7936 00:17:35.242 } 00:17:35.242 ] 00:17:35.242 }' 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.242 03:26:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.812 [2024-11-21 03:26:23.150618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.812 [2024-11-21 03:26:23.150691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.812 [2024-11-21 03:26:23.150743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.812 [2024-11-21 03:26:23.150779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.812 [2024-11-21 03:26:23.150788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:35.812 03:26:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.812 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 [2024-11-21 03:26:23.194642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.813 [2024-11-21 03:26:23.194732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.813 [2024-11-21 03:26:23.194765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:35.813 [2024-11-21 03:26:23.194792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.813 [2024-11-21 03:26:23.196660] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.813 [2024-11-21 03:26:23.196727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.813 [2024-11-21 03:26:23.196789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:35.813 [2024-11-21 03:26:23.196831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.813 [2024-11-21 03:26:23.196919] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:35.813 [2024-11-21 03:26:23.196965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.813 [2024-11-21 03:26:23.197013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:17:35.813 [2024-11-21 03:26:23.197089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.813 [2024-11-21 03:26:23.197198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:35.813 [2024-11-21 03:26:23.197238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:35.813 [2024-11-21 03:26:23.197310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:35.813 [2024-11-21 03:26:23.197400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:35.813 [2024-11-21 03:26:23.197441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:35.813 [2024-11-21 03:26:23.197533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.813 pt1 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.813 "name": "raid_bdev1", 00:17:35.813 "uuid": "b14b7f32-5349-4fb4-96bc-5504344d6f09", 00:17:35.813 "strip_size_kb": 0, 00:17:35.813 "state": "online", 00:17:35.813 "raid_level": "raid1", 00:17:35.813 "superblock": true, 00:17:35.813 "num_base_bdevs": 2, 00:17:35.813 "num_base_bdevs_discovered": 1, 00:17:35.813 "num_base_bdevs_operational": 1, 00:17:35.813 "base_bdevs_list": [ 00:17:35.813 { 00:17:35.813 "name": null, 00:17:35.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.813 "is_configured": false, 00:17:35.813 "data_offset": 256, 00:17:35.813 "data_size": 7936 00:17:35.813 }, 00:17:35.813 { 00:17:35.813 "name": "pt2", 00:17:35.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.813 "is_configured": true, 00:17:35.813 "data_offset": 256, 00:17:35.813 "data_size": 7936 00:17:35.813 } 00:17:35.813 ] 00:17:35.813 }' 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.813 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.383 [2024-11-21 03:26:23.751048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b14b7f32-5349-4fb4-96bc-5504344d6f09 '!=' b14b7f32-5349-4fb4-96bc-5504344d6f09 ']' 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 101056 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 101056 ']' 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 101056 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101056 00:17:36.383 killing process with pid 101056 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101056' 00:17:36.383 
03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 101056 00:17:36.383 [2024-11-21 03:26:23.817384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.383 [2024-11-21 03:26:23.817453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.383 [2024-11-21 03:26:23.817493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.383 [2024-11-21 03:26:23.817502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:36.383 03:26:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 101056 00:17:36.383 [2024-11-21 03:26:23.840661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.644 03:26:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:36.644 00:17:36.644 real 0m4.957s 00:17:36.644 user 0m8.109s 00:17:36.644 sys 0m1.098s 00:17:36.644 03:26:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.644 ************************************ 00:17:36.644 END TEST raid_superblock_test_md_interleaved 00:17:36.644 ************************************ 00:17:36.644 03:26:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.644 03:26:24 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:36.644 03:26:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:36.644 03:26:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.644 03:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.644 ************************************ 00:17:36.644 START TEST raid_rebuild_test_sb_md_interleaved 00:17:36.644 
************************************ 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:36.644 03:26:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=101371 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:36.644 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 101371 00:17:36.645 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 101371 ']' 00:17:36.645 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:36.645 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.645 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.645 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.645 03:26:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.905 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:36.905 Zero copy mechanism will not be used. 00:17:36.905 [2024-11-21 03:26:24.255947] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:17:36.905 [2024-11-21 03:26:24.256089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101371 ] 00:17:36.905 [2024-11-21 03:26:24.396380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:36.906 [2024-11-21 03:26:24.435384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.906 [2024-11-21 03:26:24.463036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.166 [2024-11-21 03:26:24.506829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.166 [2024-11-21 03:26:24.506862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 BaseBdev1_malloc 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 [2024-11-21 03:26:25.078696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:37.737 [2024-11-21 03:26:25.078767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.737 
[2024-11-21 03:26:25.078793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:37.737 [2024-11-21 03:26:25.078844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.737 [2024-11-21 03:26:25.080711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.737 [2024-11-21 03:26:25.080750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:37.737 BaseBdev1 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 BaseBdev2_malloc 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 [2024-11-21 03:26:25.107499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:37.737 [2024-11-21 03:26:25.107557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.737 [2024-11-21 03:26:25.107577] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:37.737 [2024-11-21 03:26:25.107588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.737 [2024-11-21 03:26:25.109387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.737 [2024-11-21 03:26:25.109428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:37.737 BaseBdev2 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 spare_malloc 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 spare_delay 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.737 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.737 03:26:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 [2024-11-21 03:26:25.148280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.737 [2024-11-21 03:26:25.148333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.737 [2024-11-21 03:26:25.148351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:37.737 [2024-11-21 03:26:25.148363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.737 [2024-11-21 03:26:25.150152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.737 [2024-11-21 03:26:25.150269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.737 spare 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 [2024-11-21 03:26:25.160335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.738 [2024-11-21 03:26:25.162116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.738 [2024-11-21 03:26:25.162265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:37.738 [2024-11-21 03:26:25.162284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:37.738 [2024-11-21 03:26:25.162357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:17:37.738 [2024-11-21 03:26:25.162441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:37.738 [2024-11-21 03:26:25.162461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:37.738 [2024-11-21 03:26:25.162520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.738 "name": "raid_bdev1", 00:17:37.738 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:37.738 "strip_size_kb": 0, 00:17:37.738 "state": "online", 00:17:37.738 "raid_level": "raid1", 00:17:37.738 "superblock": true, 00:17:37.738 "num_base_bdevs": 2, 00:17:37.738 "num_base_bdevs_discovered": 2, 00:17:37.738 "num_base_bdevs_operational": 2, 00:17:37.738 "base_bdevs_list": [ 00:17:37.738 { 00:17:37.738 "name": "BaseBdev1", 00:17:37.738 "uuid": "7a85cb6c-e464-5eeb-be6a-f2d8376e824d", 00:17:37.738 "is_configured": true, 00:17:37.738 "data_offset": 256, 00:17:37.738 "data_size": 7936 00:17:37.738 }, 00:17:37.738 { 00:17:37.738 "name": "BaseBdev2", 00:17:37.738 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:37.738 "is_configured": true, 00:17:37.738 "data_offset": 256, 00:17:37.738 "data_size": 7936 00:17:37.738 } 00:17:37.738 ] 00:17:37.738 }' 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.738 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.308 03:26:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:38.308 [2024-11-21 03:26:25.608708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.308 [2024-11-21 03:26:25.700426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.308 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.309 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.309 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.309 "name": "raid_bdev1", 00:17:38.309 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:38.309 "strip_size_kb": 0, 00:17:38.309 "state": "online", 00:17:38.309 "raid_level": "raid1", 00:17:38.309 "superblock": true, 00:17:38.309 "num_base_bdevs": 2, 00:17:38.309 "num_base_bdevs_discovered": 1, 00:17:38.309 "num_base_bdevs_operational": 1, 00:17:38.309 "base_bdevs_list": [ 00:17:38.309 { 00:17:38.309 "name": null, 00:17:38.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.309 "is_configured": false, 00:17:38.309 "data_offset": 0, 00:17:38.309 "data_size": 7936 00:17:38.309 }, 00:17:38.309 { 00:17:38.309 "name": "BaseBdev2", 00:17:38.309 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:38.309 "is_configured": true, 00:17:38.309 "data_offset": 256, 00:17:38.309 "data_size": 7936 00:17:38.309 } 00:17:38.309 ] 00:17:38.309 }' 00:17:38.309 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.309 03:26:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.568 03:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.568 03:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.569 03:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.569 [2024-11-21 03:26:26.076587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.569 [2024-11-21 03:26:26.093148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:38.569 03:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.569 03:26:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:38.569 
[2024-11-21 03:26:26.099438] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.952 "name": "raid_bdev1", 00:17:39.952 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:39.952 "strip_size_kb": 0, 00:17:39.952 "state": "online", 00:17:39.952 "raid_level": "raid1", 00:17:39.952 "superblock": true, 00:17:39.952 "num_base_bdevs": 2, 00:17:39.952 "num_base_bdevs_discovered": 2, 00:17:39.952 "num_base_bdevs_operational": 2, 00:17:39.952 "process": { 00:17:39.952 "type": "rebuild", 00:17:39.952 "target": "spare", 00:17:39.952 "progress": { 00:17:39.952 
"blocks": 2560, 00:17:39.952 "percent": 32 00:17:39.952 } 00:17:39.952 }, 00:17:39.952 "base_bdevs_list": [ 00:17:39.952 { 00:17:39.952 "name": "spare", 00:17:39.952 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:39.952 "is_configured": true, 00:17:39.952 "data_offset": 256, 00:17:39.952 "data_size": 7936 00:17:39.952 }, 00:17:39.952 { 00:17:39.952 "name": "BaseBdev2", 00:17:39.952 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:39.952 "is_configured": true, 00:17:39.952 "data_offset": 256, 00:17:39.952 "data_size": 7936 00:17:39.952 } 00:17:39.952 ] 00:17:39.952 }' 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.952 [2024-11-21 03:26:27.232252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.952 [2024-11-21 03:26:27.306760] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.952 [2024-11-21 03:26:27.306824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.952 [2024-11-21 03:26:27.306838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.952 [2024-11-21 03:26:27.306867] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.952 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:17:39.953 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.953 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.953 "name": "raid_bdev1", 00:17:39.953 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:39.953 "strip_size_kb": 0, 00:17:39.953 "state": "online", 00:17:39.953 "raid_level": "raid1", 00:17:39.953 "superblock": true, 00:17:39.953 "num_base_bdevs": 2, 00:17:39.953 "num_base_bdevs_discovered": 1, 00:17:39.953 "num_base_bdevs_operational": 1, 00:17:39.953 "base_bdevs_list": [ 00:17:39.953 { 00:17:39.953 "name": null, 00:17:39.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.953 "is_configured": false, 00:17:39.953 "data_offset": 0, 00:17:39.953 "data_size": 7936 00:17:39.953 }, 00:17:39.953 { 00:17:39.953 "name": "BaseBdev2", 00:17:39.953 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:39.953 "is_configured": true, 00:17:39.953 "data_offset": 256, 00:17:39.953 "data_size": 7936 00:17:39.953 } 00:17:39.953 ] 00:17:39.953 }' 00:17:39.953 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.953 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.523 03:26:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.523 "name": "raid_bdev1", 00:17:40.523 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:40.523 "strip_size_kb": 0, 00:17:40.523 "state": "online", 00:17:40.523 "raid_level": "raid1", 00:17:40.523 "superblock": true, 00:17:40.523 "num_base_bdevs": 2, 00:17:40.523 "num_base_bdevs_discovered": 1, 00:17:40.523 "num_base_bdevs_operational": 1, 00:17:40.523 "base_bdevs_list": [ 00:17:40.523 { 00:17:40.523 "name": null, 00:17:40.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.523 "is_configured": false, 00:17:40.523 "data_offset": 0, 00:17:40.523 "data_size": 7936 00:17:40.523 }, 00:17:40.523 { 00:17:40.523 "name": "BaseBdev2", 00:17:40.523 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:40.523 "is_configured": true, 00:17:40.523 "data_offset": 256, 00:17:40.523 "data_size": 7936 00:17:40.523 } 00:17:40.523 ] 00:17:40.523 }' 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.523 03:26:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.523 [2024-11-21 03:26:27.911101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.523 [2024-11-21 03:26:27.914698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:40.523 [2024-11-21 03:26:27.916538] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.523 03:26:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.461 "name": "raid_bdev1", 00:17:41.461 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:41.461 "strip_size_kb": 0, 00:17:41.461 "state": "online", 00:17:41.461 "raid_level": "raid1", 00:17:41.461 "superblock": true, 00:17:41.461 "num_base_bdevs": 2, 00:17:41.461 "num_base_bdevs_discovered": 2, 00:17:41.461 "num_base_bdevs_operational": 2, 00:17:41.461 "process": { 00:17:41.461 "type": "rebuild", 00:17:41.461 "target": "spare", 00:17:41.461 "progress": { 00:17:41.461 "blocks": 2560, 00:17:41.461 "percent": 32 00:17:41.461 } 00:17:41.461 }, 00:17:41.461 "base_bdevs_list": [ 00:17:41.461 { 00:17:41.461 "name": "spare", 00:17:41.461 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:41.461 "is_configured": true, 00:17:41.461 "data_offset": 256, 00:17:41.461 "data_size": 7936 00:17:41.461 }, 00:17:41.461 { 00:17:41.461 "name": "BaseBdev2", 00:17:41.461 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:41.461 "is_configured": true, 00:17:41.461 "data_offset": 256, 00:17:41.461 "data_size": 7936 00:17:41.461 } 00:17:41.461 ] 00:17:41.461 }' 00:17:41.461 03:26:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.461 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.461 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.721 03:26:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:41.721 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=624 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.721 03:26:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.721 "name": "raid_bdev1", 00:17:41.721 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:41.721 "strip_size_kb": 0, 00:17:41.721 "state": "online", 00:17:41.721 "raid_level": "raid1", 00:17:41.721 "superblock": true, 00:17:41.721 "num_base_bdevs": 2, 00:17:41.721 "num_base_bdevs_discovered": 2, 00:17:41.721 "num_base_bdevs_operational": 2, 00:17:41.721 "process": { 00:17:41.721 "type": "rebuild", 00:17:41.721 "target": "spare", 00:17:41.721 "progress": { 00:17:41.721 "blocks": 2816, 00:17:41.721 "percent": 35 00:17:41.721 } 00:17:41.721 }, 00:17:41.721 "base_bdevs_list": [ 00:17:41.721 { 00:17:41.721 "name": "spare", 00:17:41.721 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:41.721 "is_configured": true, 00:17:41.721 "data_offset": 256, 00:17:41.721 "data_size": 7936 00:17:41.721 }, 00:17:41.721 { 00:17:41.721 "name": "BaseBdev2", 00:17:41.721 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:41.721 "is_configured": true, 00:17:41.721 "data_offset": 256, 00:17:41.721 "data_size": 7936 00:17:41.721 } 00:17:41.721 ] 00:17:41.721 }' 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.721 03:26:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.103 "name": "raid_bdev1", 00:17:43.103 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:43.103 "strip_size_kb": 0, 00:17:43.103 "state": "online", 00:17:43.103 "raid_level": "raid1", 00:17:43.103 "superblock": true, 00:17:43.103 "num_base_bdevs": 2, 00:17:43.103 "num_base_bdevs_discovered": 2, 00:17:43.103 
"num_base_bdevs_operational": 2, 00:17:43.103 "process": { 00:17:43.103 "type": "rebuild", 00:17:43.103 "target": "spare", 00:17:43.103 "progress": { 00:17:43.103 "blocks": 5888, 00:17:43.103 "percent": 74 00:17:43.103 } 00:17:43.103 }, 00:17:43.103 "base_bdevs_list": [ 00:17:43.103 { 00:17:43.103 "name": "spare", 00:17:43.103 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:43.103 "is_configured": true, 00:17:43.103 "data_offset": 256, 00:17:43.103 "data_size": 7936 00:17:43.103 }, 00:17:43.103 { 00:17:43.103 "name": "BaseBdev2", 00:17:43.103 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:43.103 "is_configured": true, 00:17:43.103 "data_offset": 256, 00:17:43.103 "data_size": 7936 00:17:43.103 } 00:17:43.103 ] 00:17:43.103 }' 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.103 03:26:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.673 [2024-11-21 03:26:31.032599] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:43.673 [2024-11-21 03:26:31.032678] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:43.673 [2024-11-21 03:26:31.032785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.933 "name": "raid_bdev1", 00:17:43.933 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:43.933 "strip_size_kb": 0, 00:17:43.933 "state": "online", 00:17:43.933 "raid_level": "raid1", 00:17:43.933 "superblock": true, 00:17:43.933 "num_base_bdevs": 2, 00:17:43.933 "num_base_bdevs_discovered": 2, 00:17:43.933 "num_base_bdevs_operational": 2, 00:17:43.933 "base_bdevs_list": [ 00:17:43.933 { 00:17:43.933 "name": "spare", 00:17:43.933 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:43.933 "is_configured": true, 00:17:43.933 "data_offset": 256, 00:17:43.933 "data_size": 7936 00:17:43.933 }, 00:17:43.933 { 00:17:43.933 "name": "BaseBdev2", 00:17:43.933 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:43.933 
"is_configured": true, 00:17:43.933 "data_offset": 256, 00:17:43.933 "data_size": 7936 00:17:43.933 } 00:17:43.933 ] 00:17:43.933 }' 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:43.933 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.193 "name": "raid_bdev1", 00:17:44.193 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:44.193 "strip_size_kb": 0, 00:17:44.193 "state": "online", 00:17:44.193 "raid_level": "raid1", 00:17:44.193 "superblock": true, 00:17:44.193 "num_base_bdevs": 2, 00:17:44.193 "num_base_bdevs_discovered": 2, 00:17:44.193 "num_base_bdevs_operational": 2, 00:17:44.193 "base_bdevs_list": [ 00:17:44.193 { 00:17:44.193 "name": "spare", 00:17:44.193 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:44.193 "is_configured": true, 00:17:44.193 "data_offset": 256, 00:17:44.193 "data_size": 7936 00:17:44.193 }, 00:17:44.193 { 00:17:44.193 "name": "BaseBdev2", 00:17:44.193 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:44.193 "is_configured": true, 00:17:44.193 "data_offset": 256, 00:17:44.193 "data_size": 7936 00:17:44.193 } 00:17:44.193 ] 00:17:44.193 }' 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.193 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.194 "name": "raid_bdev1", 00:17:44.194 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:44.194 "strip_size_kb": 0, 00:17:44.194 "state": "online", 00:17:44.194 "raid_level": "raid1", 00:17:44.194 "superblock": true, 00:17:44.194 "num_base_bdevs": 2, 00:17:44.194 "num_base_bdevs_discovered": 2, 00:17:44.194 "num_base_bdevs_operational": 2, 00:17:44.194 "base_bdevs_list": [ 00:17:44.194 { 00:17:44.194 "name": "spare", 00:17:44.194 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:44.194 
"is_configured": true, 00:17:44.194 "data_offset": 256, 00:17:44.194 "data_size": 7936 00:17:44.194 }, 00:17:44.194 { 00:17:44.194 "name": "BaseBdev2", 00:17:44.194 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:44.194 "is_configured": true, 00:17:44.194 "data_offset": 256, 00:17:44.194 "data_size": 7936 00:17:44.194 } 00:17:44.194 ] 00:17:44.194 }' 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.194 03:26:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.763 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.763 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.763 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.763 [2024-11-21 03:26:32.045058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.763 [2024-11-21 03:26:32.045090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.763 [2024-11-21 03:26:32.045184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.763 [2024-11-21 03:26:32.045249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.763 [2024-11-21 03:26:32.045270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:44.763 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:44.764 
03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.764 [2024-11-21 03:26:32.117074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.764 [2024-11-21 03:26:32.117124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.764 [2024-11-21 03:26:32.117146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:44.764 [2024-11-21 03:26:32.117154] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.764 [2024-11-21 03:26:32.119223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.764 [2024-11-21 03:26:32.119259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.764 [2024-11-21 03:26:32.119311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.764 [2024-11-21 03:26:32.119348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.764 [2024-11-21 03:26:32.119446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.764 spare 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.764 [2024-11-21 03:26:32.219509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:44.764 [2024-11-21 03:26:32.219537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:44.764 [2024-11-21 03:26:32.219623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:17:44.764 [2024-11-21 03:26:32.219700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:44.764 [2024-11-21 03:26:32.219709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:44.764 [2024-11-21 03:26:32.219777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.764 03:26:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.764 03:26:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.764 "name": "raid_bdev1", 00:17:44.764 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:44.764 "strip_size_kb": 0, 00:17:44.764 "state": "online", 00:17:44.764 "raid_level": "raid1", 00:17:44.764 "superblock": true, 00:17:44.764 "num_base_bdevs": 2, 00:17:44.764 "num_base_bdevs_discovered": 2, 00:17:44.764 "num_base_bdevs_operational": 2, 00:17:44.764 "base_bdevs_list": [ 00:17:44.764 { 00:17:44.764 "name": "spare", 00:17:44.764 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:44.764 "is_configured": true, 00:17:44.764 "data_offset": 256, 00:17:44.764 "data_size": 7936 00:17:44.764 }, 00:17:44.764 { 00:17:44.764 "name": "BaseBdev2", 00:17:44.764 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:44.764 "is_configured": true, 00:17:44.764 "data_offset": 256, 00:17:44.764 "data_size": 7936 00:17:44.764 } 00:17:44.764 ] 00:17:44.764 }' 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.764 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.334 03:26:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.334 "name": "raid_bdev1", 00:17:45.334 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:45.334 "strip_size_kb": 0, 00:17:45.334 "state": "online", 00:17:45.334 "raid_level": "raid1", 00:17:45.334 "superblock": true, 00:17:45.334 "num_base_bdevs": 2, 00:17:45.334 "num_base_bdevs_discovered": 2, 00:17:45.334 "num_base_bdevs_operational": 2, 00:17:45.334 "base_bdevs_list": [ 00:17:45.334 { 00:17:45.334 "name": "spare", 00:17:45.334 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:45.334 "is_configured": true, 00:17:45.334 "data_offset": 256, 00:17:45.334 "data_size": 7936 00:17:45.334 }, 00:17:45.334 { 00:17:45.334 "name": "BaseBdev2", 00:17:45.334 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:45.334 "is_configured": true, 00:17:45.334 "data_offset": 256, 00:17:45.334 "data_size": 7936 00:17:45.334 } 00:17:45.334 ] 00:17:45.334 }' 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.334 03:26:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.334 [2024-11-21 03:26:32.869307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.334 03:26:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.334 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.335 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.594 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.594 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.595 "name": "raid_bdev1", 00:17:45.595 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:45.595 "strip_size_kb": 0, 00:17:45.595 "state": "online", 00:17:45.595 "raid_level": "raid1", 00:17:45.595 "superblock": true, 00:17:45.595 "num_base_bdevs": 2, 00:17:45.595 "num_base_bdevs_discovered": 1, 00:17:45.595 "num_base_bdevs_operational": 1, 00:17:45.595 "base_bdevs_list": [ 00:17:45.595 { 00:17:45.595 "name": null, 00:17:45.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.595 "is_configured": false, 00:17:45.595 "data_offset": 0, 00:17:45.595 "data_size": 7936 00:17:45.595 }, 00:17:45.595 { 00:17:45.595 "name": "BaseBdev2", 00:17:45.595 
"uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:45.595 "is_configured": true, 00:17:45.595 "data_offset": 256, 00:17:45.595 "data_size": 7936 00:17:45.595 } 00:17:45.595 ] 00:17:45.595 }' 00:17:45.595 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.595 03:26:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.854 03:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.854 03:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.854 03:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.854 [2024-11-21 03:26:33.329448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.855 [2024-11-21 03:26:33.329599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.855 [2024-11-21 03:26:33.329622] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:45.855 [2024-11-21 03:26:33.329657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.855 [2024-11-21 03:26:33.333230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:45.855 [2024-11-21 03:26:33.335090] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.855 03:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.855 03:26:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.794 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.054 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.054 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:47.054 "name": "raid_bdev1", 00:17:47.054 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:47.054 "strip_size_kb": 0, 00:17:47.054 "state": "online", 00:17:47.054 "raid_level": "raid1", 00:17:47.054 "superblock": true, 00:17:47.054 "num_base_bdevs": 2, 00:17:47.054 "num_base_bdevs_discovered": 2, 00:17:47.054 "num_base_bdevs_operational": 2, 00:17:47.054 "process": { 00:17:47.054 "type": "rebuild", 00:17:47.054 "target": "spare", 00:17:47.054 "progress": { 00:17:47.054 "blocks": 2560, 00:17:47.054 "percent": 32 00:17:47.054 } 00:17:47.054 }, 00:17:47.054 "base_bdevs_list": [ 00:17:47.054 { 00:17:47.054 "name": "spare", 00:17:47.054 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:47.054 "is_configured": true, 00:17:47.054 "data_offset": 256, 00:17:47.054 "data_size": 7936 00:17:47.054 }, 00:17:47.054 { 00:17:47.055 "name": "BaseBdev2", 00:17:47.055 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:47.055 "is_configured": true, 00:17:47.055 "data_offset": 256, 00:17:47.055 "data_size": 7936 00:17:47.055 } 00:17:47.055 ] 00:17:47.055 }' 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.055 [2024-11-21 03:26:34.497277] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.055 [2024-11-21 03:26:34.541201] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.055 [2024-11-21 03:26:34.541354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.055 [2024-11-21 03:26:34.541371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.055 [2024-11-21 03:26:34.541379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.055 03:26:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.055 "name": "raid_bdev1", 00:17:47.055 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:47.055 "strip_size_kb": 0, 00:17:47.055 "state": "online", 00:17:47.055 "raid_level": "raid1", 00:17:47.055 "superblock": true, 00:17:47.055 "num_base_bdevs": 2, 00:17:47.055 "num_base_bdevs_discovered": 1, 00:17:47.055 "num_base_bdevs_operational": 1, 00:17:47.055 "base_bdevs_list": [ 00:17:47.055 { 00:17:47.055 "name": null, 00:17:47.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.055 "is_configured": false, 00:17:47.055 "data_offset": 0, 00:17:47.055 "data_size": 7936 00:17:47.055 }, 00:17:47.055 { 00:17:47.055 "name": "BaseBdev2", 00:17:47.055 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:47.055 "is_configured": true, 00:17:47.055 "data_offset": 256, 00:17:47.055 "data_size": 7936 00:17:47.055 } 00:17:47.055 ] 00:17:47.055 }' 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.055 03:26:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.625 03:26:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.625 03:26:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.625 03:26:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.625 [2024-11-21 03:26:35.041411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.625 [2024-11-21 03:26:35.041474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.625 [2024-11-21 03:26:35.041499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:47.625 [2024-11-21 03:26:35.041509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.625 [2024-11-21 03:26:35.041678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.625 [2024-11-21 03:26:35.041694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.625 [2024-11-21 03:26:35.041745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.625 [2024-11-21 03:26:35.041758] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.625 [2024-11-21 03:26:35.041770] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:47.625 [2024-11-21 03:26:35.041792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.625 [2024-11-21 03:26:35.045296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:17:47.625 [2024-11-21 03:26:35.047077] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.625 spare 00:17:47.625 03:26:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.625 03:26:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:48.592 "name": "raid_bdev1", 00:17:48.592 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:48.592 "strip_size_kb": 0, 00:17:48.592 "state": "online", 00:17:48.592 "raid_level": "raid1", 00:17:48.592 "superblock": true, 00:17:48.592 "num_base_bdevs": 2, 00:17:48.592 "num_base_bdevs_discovered": 2, 00:17:48.592 "num_base_bdevs_operational": 2, 00:17:48.592 "process": { 00:17:48.592 "type": "rebuild", 00:17:48.592 "target": "spare", 00:17:48.592 "progress": { 00:17:48.592 "blocks": 2560, 00:17:48.592 "percent": 32 00:17:48.592 } 00:17:48.592 }, 00:17:48.592 "base_bdevs_list": [ 00:17:48.592 { 00:17:48.592 "name": "spare", 00:17:48.592 "uuid": "94dee115-1a42-5b52-bfec-e897c899afd0", 00:17:48.592 "is_configured": true, 00:17:48.592 "data_offset": 256, 00:17:48.592 "data_size": 7936 00:17:48.592 }, 00:17:48.592 { 00:17:48.592 "name": "BaseBdev2", 00:17:48.592 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:48.592 "is_configured": true, 00:17:48.592 "data_offset": 256, 00:17:48.592 "data_size": 7936 00:17:48.592 } 00:17:48.592 ] 00:17:48.592 }' 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.592 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.852 [2024-11-21 
03:26:36.204376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.852 [2024-11-21 03:26:36.253187] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.852 [2024-11-21 03:26:36.253242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.852 [2024-11-21 03:26:36.253274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.852 [2024-11-21 03:26:36.253281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.852 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.853 03:26:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.853 "name": "raid_bdev1", 00:17:48.853 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:48.853 "strip_size_kb": 0, 00:17:48.853 "state": "online", 00:17:48.853 "raid_level": "raid1", 00:17:48.853 "superblock": true, 00:17:48.853 "num_base_bdevs": 2, 00:17:48.853 "num_base_bdevs_discovered": 1, 00:17:48.853 "num_base_bdevs_operational": 1, 00:17:48.853 "base_bdevs_list": [ 00:17:48.853 { 00:17:48.853 "name": null, 00:17:48.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.853 "is_configured": false, 00:17:48.853 "data_offset": 0, 00:17:48.853 "data_size": 7936 00:17:48.853 }, 00:17:48.853 { 00:17:48.853 "name": "BaseBdev2", 00:17:48.853 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:48.853 "is_configured": true, 00:17:48.853 "data_offset": 256, 00:17:48.853 "data_size": 7936 00:17:48.853 } 00:17:48.853 ] 00:17:48.853 }' 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.853 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.422 03:26:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.422 "name": "raid_bdev1", 00:17:49.422 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:49.422 "strip_size_kb": 0, 00:17:49.422 "state": "online", 00:17:49.422 "raid_level": "raid1", 00:17:49.422 "superblock": true, 00:17:49.422 "num_base_bdevs": 2, 00:17:49.422 "num_base_bdevs_discovered": 1, 00:17:49.422 "num_base_bdevs_operational": 1, 00:17:49.422 "base_bdevs_list": [ 00:17:49.422 { 00:17:49.422 "name": null, 00:17:49.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.422 "is_configured": false, 00:17:49.422 "data_offset": 0, 00:17:49.422 "data_size": 7936 00:17:49.422 }, 00:17:49.422 { 00:17:49.422 "name": "BaseBdev2", 00:17:49.422 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:49.422 "is_configured": true, 00:17:49.422 "data_offset": 256, 
00:17:49.422 "data_size": 7936 00:17:49.422 } 00:17:49.422 ] 00:17:49.422 }' 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.422 [2024-11-21 03:26:36.865332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.422 [2024-11-21 03:26:36.865391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.422 [2024-11-21 03:26:36.865413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:49.422 [2024-11-21 03:26:36.865422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.422 [2024-11-21 03:26:36.865570] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.422 [2024-11-21 03:26:36.865581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.422 [2024-11-21 03:26:36.865623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:49.422 [2024-11-21 03:26:36.865636] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.422 [2024-11-21 03:26:36.865647] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.422 [2024-11-21 03:26:36.865656] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:49.422 BaseBdev1 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.422 03:26:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.362 03:26:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.362 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.622 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.622 "name": "raid_bdev1", 00:17:50.622 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:50.622 "strip_size_kb": 0, 00:17:50.622 "state": "online", 00:17:50.622 "raid_level": "raid1", 00:17:50.622 "superblock": true, 00:17:50.622 "num_base_bdevs": 2, 00:17:50.622 "num_base_bdevs_discovered": 1, 00:17:50.622 "num_base_bdevs_operational": 1, 00:17:50.622 "base_bdevs_list": [ 00:17:50.622 { 00:17:50.622 "name": null, 00:17:50.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.622 "is_configured": false, 00:17:50.622 "data_offset": 0, 00:17:50.622 "data_size": 7936 00:17:50.622 }, 00:17:50.622 { 00:17:50.622 "name": "BaseBdev2", 00:17:50.622 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:50.622 "is_configured": true, 00:17:50.622 "data_offset": 256, 00:17:50.622 "data_size": 7936 00:17:50.622 } 00:17:50.622 ] 00:17:50.622 }' 00:17:50.622 03:26:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.622 03:26:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.882 "name": "raid_bdev1", 00:17:50.882 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:50.882 "strip_size_kb": 0, 00:17:50.882 "state": "online", 00:17:50.882 "raid_level": "raid1", 00:17:50.882 "superblock": true, 00:17:50.882 "num_base_bdevs": 2, 00:17:50.882 "num_base_bdevs_discovered": 1, 00:17:50.882 "num_base_bdevs_operational": 1, 00:17:50.882 "base_bdevs_list": [ 00:17:50.882 { 00:17:50.882 "name": 
null, 00:17:50.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.882 "is_configured": false, 00:17:50.882 "data_offset": 0, 00:17:50.882 "data_size": 7936 00:17:50.882 }, 00:17:50.882 { 00:17:50.882 "name": "BaseBdev2", 00:17:50.882 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:50.882 "is_configured": true, 00:17:50.882 "data_offset": 256, 00:17:50.882 "data_size": 7936 00:17:50.882 } 00:17:50.882 ] 00:17:50.882 }' 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.882 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.142 [2024-11-21 03:26:38.485764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.142 [2024-11-21 03:26:38.485915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.142 [2024-11-21 03:26:38.485934] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:51.142 request: 00:17:51.142 { 00:17:51.142 "base_bdev": "BaseBdev1", 00:17:51.142 "raid_bdev": "raid_bdev1", 00:17:51.142 "method": "bdev_raid_add_base_bdev", 00:17:51.142 "req_id": 1 00:17:51.142 } 00:17:51.142 Got JSON-RPC error response 00:17:51.142 response: 00:17:51.142 { 00:17:51.142 "code": -22, 00:17:51.142 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:51.142 } 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.142 03:26:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.082 "name": "raid_bdev1", 00:17:52.082 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:52.082 "strip_size_kb": 0, 
00:17:52.082 "state": "online", 00:17:52.082 "raid_level": "raid1", 00:17:52.082 "superblock": true, 00:17:52.082 "num_base_bdevs": 2, 00:17:52.082 "num_base_bdevs_discovered": 1, 00:17:52.082 "num_base_bdevs_operational": 1, 00:17:52.082 "base_bdevs_list": [ 00:17:52.082 { 00:17:52.082 "name": null, 00:17:52.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.082 "is_configured": false, 00:17:52.082 "data_offset": 0, 00:17:52.082 "data_size": 7936 00:17:52.082 }, 00:17:52.082 { 00:17:52.082 "name": "BaseBdev2", 00:17:52.082 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:52.082 "is_configured": true, 00:17:52.082 "data_offset": 256, 00:17:52.082 "data_size": 7936 00:17:52.082 } 00:17:52.082 ] 00:17:52.082 }' 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.082 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.653 
03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.653 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.653 "name": "raid_bdev1", 00:17:52.653 "uuid": "625d19c4-0352-4f00-a88d-b3df96af79d7", 00:17:52.653 "strip_size_kb": 0, 00:17:52.654 "state": "online", 00:17:52.654 "raid_level": "raid1", 00:17:52.654 "superblock": true, 00:17:52.654 "num_base_bdevs": 2, 00:17:52.654 "num_base_bdevs_discovered": 1, 00:17:52.654 "num_base_bdevs_operational": 1, 00:17:52.654 "base_bdevs_list": [ 00:17:52.654 { 00:17:52.654 "name": null, 00:17:52.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.654 "is_configured": false, 00:17:52.654 "data_offset": 0, 00:17:52.654 "data_size": 7936 00:17:52.654 }, 00:17:52.654 { 00:17:52.654 "name": "BaseBdev2", 00:17:52.654 "uuid": "b88af239-7b49-5d93-8916-13860411dc01", 00:17:52.654 "is_configured": true, 00:17:52.654 "data_offset": 256, 00:17:52.654 "data_size": 7936 00:17:52.654 } 00:17:52.654 ] 00:17:52.654 }' 00:17:52.654 03:26:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 101371 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 101371 ']' 00:17:52.654 03:26:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 101371 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101371 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101371' 00:17:52.654 killing process with pid 101371 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 101371 00:17:52.654 Received shutdown signal, test time was about 60.000000 seconds 00:17:52.654 00:17:52.654 Latency(us) 00:17:52.654 [2024-11-21T03:26:40.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.654 [2024-11-21T03:26:40.220Z] =================================================================================================================== 00:17:52.654 [2024-11-21T03:26:40.220Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.654 [2024-11-21 03:26:40.127299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.654 [2024-11-21 03:26:40.127414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.654 [2024-11-21 03:26:40.127462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.654 [2024-11-21 03:26:40.127474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:52.654 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 101371 00:17:52.654 [2024-11-21 03:26:40.160100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.915 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:52.915 00:17:52.915 real 0m16.215s 00:17:52.915 user 0m21.659s 00:17:52.915 sys 0m1.746s 00:17:52.915 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.915 ************************************ 00:17:52.915 END TEST raid_rebuild_test_sb_md_interleaved 00:17:52.915 ************************************ 00:17:52.915 03:26:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.915 03:26:40 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:52.915 03:26:40 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:52.915 03:26:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 101371 ']' 00:17:52.915 03:26:40 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 101371 00:17:52.915 03:26:40 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:52.915 ************************************ 00:17:52.915 END TEST bdev_raid 00:17:52.915 ************************************ 00:17:52.915 00:17:52.915 real 10m5.369s 00:17:52.915 user 14m19.234s 00:17:52.915 sys 1m54.632s 00:17:52.915 03:26:40 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.915 03:26:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.175 03:26:40 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:53.175 03:26:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.175 03:26:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.175 03:26:40 -- common/autotest_common.sh@10 -- # set +x 00:17:53.175 
************************************ 00:17:53.175 START TEST spdkcli_raid 00:17:53.175 ************************************ 00:17:53.175 03:26:40 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:53.175 * Looking for test storage... 00:17:53.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:53.175 03:26:40 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.175 03:26:40 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.175 03:26:40 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.435 03:26:40 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.435 03:26:40 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:53.435 03:26:40 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.435 03:26:40 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.435 --rc genhtml_branch_coverage=1 00:17:53.435 --rc genhtml_function_coverage=1 00:17:53.435 --rc genhtml_legend=1 00:17:53.435 --rc geninfo_all_blocks=1 00:17:53.435 --rc geninfo_unexecuted_blocks=1 00:17:53.435 00:17:53.435 ' 00:17:53.435 03:26:40 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.436 --rc genhtml_branch_coverage=1 00:17:53.436 --rc genhtml_function_coverage=1 00:17:53.436 --rc genhtml_legend=1 00:17:53.436 --rc geninfo_all_blocks=1 00:17:53.436 --rc geninfo_unexecuted_blocks=1 00:17:53.436 00:17:53.436 ' 00:17:53.436 
03:26:40 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.436 --rc genhtml_branch_coverage=1 00:17:53.436 --rc genhtml_function_coverage=1 00:17:53.436 --rc genhtml_legend=1 00:17:53.436 --rc geninfo_all_blocks=1 00:17:53.436 --rc geninfo_unexecuted_blocks=1 00:17:53.436 00:17:53.436 ' 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.436 --rc genhtml_branch_coverage=1 00:17:53.436 --rc genhtml_function_coverage=1 00:17:53.436 --rc genhtml_legend=1 00:17:53.436 --rc geninfo_all_blocks=1 00:17:53.436 --rc geninfo_unexecuted_blocks=1 00:17:53.436 00:17:53.436 ' 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:53.436 03:26:40 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=102036 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:53.436 03:26:40 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 102036 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 102036 ']' 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.436 03:26:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.436 [2024-11-21 03:26:40.915159] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 
00:17:53.436 [2024-11-21 03:26:40.915300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102036 ] 00:17:53.696 [2024-11-21 03:26:41.056851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:53.696 [2024-11-21 03:26:41.096062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:53.696 [2024-11-21 03:26:41.123623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.696 [2024-11-21 03:26:41.123715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.266 03:26:41 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.266 03:26:41 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:54.266 03:26:41 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:54.266 03:26:41 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.266 03:26:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.266 03:26:41 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:54.266 03:26:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.266 03:26:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.266 03:26:41 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:54.266 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:54.266 ' 00:17:56.176 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:56.176 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:56.176 03:26:43 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:56.176 03:26:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.176 03:26:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.176 03:26:43 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:56.176 03:26:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.176 03:26:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.176 03:26:43 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:56.176 ' 00:17:57.116 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:57.116 03:26:44 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:57.116 03:26:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.116 03:26:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 03:26:44 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:57.116 03:26:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.116 03:26:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 03:26:44 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:57.116 03:26:44 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:57.686 03:26:45 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:57.686 03:26:45 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:57.686 03:26:45 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:57.686 03:26:45 
spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.686 03:26:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.946 03:26:45 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:57.946 03:26:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.946 03:26:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.946 03:26:45 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:57.946 ' 00:17:58.885 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:58.885 03:26:46 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:58.885 03:26:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.885 03:26:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.885 03:26:46 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:58.885 03:26:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.885 03:26:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.885 03:26:46 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:58.885 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:58.885 ' 00:18:00.266 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:00.266 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:00.266 03:26:47 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:00.266 03:26:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.266 03:26:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.526 03:26:47 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 102036 00:18:00.526 03:26:47 spdkcli_raid -- 
common/autotest_common.sh@954 -- # '[' -z 102036 ']' 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 102036 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@959 -- # uname 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102036 00:18:00.526 killing process with pid 102036 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102036' 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 102036 00:18:00.526 03:26:47 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 102036 00:18:00.785 03:26:48 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:00.785 03:26:48 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 102036 ']' 00:18:00.786 03:26:48 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 102036 00:18:00.786 03:26:48 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 102036 ']' 00:18:00.786 Process with pid 102036 is not found 00:18:00.786 03:26:48 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 102036 00:18:00.786 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (102036) - No such process 00:18:00.786 03:26:48 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 102036 is not found' 00:18:00.786 03:26:48 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:00.786 03:26:48 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:00.786 03:26:48 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:00.786 03:26:48 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:00.786 00:18:00.786 real 0m7.761s 00:18:00.786 user 0m16.338s 00:18:00.786 sys 0m1.153s 00:18:00.786 03:26:48 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.786 03:26:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.786 ************************************ 00:18:00.786 END TEST spdkcli_raid 00:18:00.786 ************************************ 00:18:01.046 03:26:48 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:01.046 03:26:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.046 03:26:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.046 03:26:48 -- common/autotest_common.sh@10 -- # set +x 00:18:01.046 ************************************ 00:18:01.046 START TEST blockdev_raid5f 00:18:01.046 ************************************ 00:18:01.046 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:01.046 * Looking for test storage... 
00:18:01.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:01.046 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:01.046 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:01.046 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:01.046 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.046 03:26:48 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:01.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.307 --rc genhtml_branch_coverage=1 00:18:01.307 --rc genhtml_function_coverage=1 00:18:01.307 --rc genhtml_legend=1 00:18:01.307 --rc geninfo_all_blocks=1 00:18:01.307 --rc geninfo_unexecuted_blocks=1 00:18:01.307 00:18:01.307 ' 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:01.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.307 --rc genhtml_branch_coverage=1 00:18:01.307 --rc genhtml_function_coverage=1 00:18:01.307 --rc genhtml_legend=1 00:18:01.307 --rc geninfo_all_blocks=1 00:18:01.307 --rc geninfo_unexecuted_blocks=1 
00:18:01.307 00:18:01.307 ' 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:01.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.307 --rc genhtml_branch_coverage=1 00:18:01.307 --rc genhtml_function_coverage=1 00:18:01.307 --rc genhtml_legend=1 00:18:01.307 --rc geninfo_all_blocks=1 00:18:01.307 --rc geninfo_unexecuted_blocks=1 00:18:01.307 00:18:01.307 ' 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:01.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.307 --rc genhtml_branch_coverage=1 00:18:01.307 --rc genhtml_function_coverage=1 00:18:01.307 --rc genhtml_legend=1 00:18:01.307 --rc geninfo_all_blocks=1 00:18:01.307 --rc geninfo_unexecuted_blocks=1 00:18:01.307 00:18:01.307 ' 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@671 -- 
# QOS_RUN_TIME=5 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=102294 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:01.307 03:26:48 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 102294 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 102294 ']' 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.307 03:26:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:01.307 [2024-11-21 03:26:48.718384] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:01.307 [2024-11-21 03:26:48.718585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102294 ] 00:18:01.307 [2024-11-21 03:26:48.852787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:01.567 [2024-11-21 03:26:48.890772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.567 [2024-11-21 03:26:48.916372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:02.140 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:02.140 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:02.140 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.140 Malloc0 00:18:02.140 Malloc1 00:18:02.140 Malloc2 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.140 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.140 03:26:49 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.140 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:02.140 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.140 03:26:49 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.141 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.141 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.141 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:02.141 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:02.141 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.141 03:26:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.401 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:02.401 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@748 
-- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aa1b99cb-1421-4f92-a61b-3c724db55e84"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aa1b99cb-1421-4f92-a61b-3c724db55e84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa1b99cb-1421-4f92-a61b-3c724db55e84",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "65a2367c-373d-47b4-a2f5-b865e83411df",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fa559841-0c32-476a-8643-50fd7d2faab4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "659a17c0-3510-44c5-be63-26128a5c5a30",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:02.401 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:02.401 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:02.401 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:02.401 03:26:49 blockdev_raid5f -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:02.401 03:26:49 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 102294 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 102294 ']' 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 102294 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102294 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.401 killing process with pid 102294 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102294' 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 102294 00:18:02.401 03:26:49 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 102294 00:18:02.662 03:26:50 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:02.662 03:26:50 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:02.662 03:26:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:02.662 03:26:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.662 03:26:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.662 ************************************ 00:18:02.662 START TEST bdev_hello_world 00:18:02.662 ************************************ 00:18:02.662 03:26:50 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:02.922 [2024-11-21 03:26:50.301472] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:02.922 [2024-11-21 03:26:50.301586] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102334 ] 00:18:02.922 [2024-11-21 03:26:50.436256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:02.922 [2024-11-21 03:26:50.473090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.182 [2024-11-21 03:26:50.500725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.182 [2024-11-21 03:26:50.677387] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:03.182 [2024-11-21 03:26:50.677445] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:03.182 [2024-11-21 03:26:50.677472] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:03.182 [2024-11-21 03:26:50.677759] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:03.182 [2024-11-21 03:26:50.677905] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:03.182 [2024-11-21 03:26:50.677931] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:03.182 [2024-11-21 03:26:50.677976] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:03.182 00:18:03.182 [2024-11-21 03:26:50.677993] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:03.456 00:18:03.456 real 0m0.688s 00:18:03.456 user 0m0.364s 00:18:03.456 sys 0m0.219s 00:18:03.456 03:26:50 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.456 03:26:50 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:03.456 ************************************ 00:18:03.456 END TEST bdev_hello_world 00:18:03.456 ************************************ 00:18:03.456 03:26:50 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:03.456 03:26:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:03.456 03:26:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.456 03:26:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:03.456 ************************************ 00:18:03.456 START TEST bdev_bounds 00:18:03.456 ************************************ 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=102359 00:18:03.456 Process bdevio pid: 102359 00:18:03.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 102359' 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 102359 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 102359 ']' 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.456 03:26:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:03.716 [2024-11-21 03:26:51.064112] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:03.716 [2024-11-21 03:26:51.064299] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102359 ] 00:18:03.716 [2024-11-21 03:26:51.200075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:03.716 [2024-11-21 03:26:51.240320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.716 [2024-11-21 03:26:51.269924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.716 [2024-11-21 03:26:51.270179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.716 [2024-11-21 03:26:51.270052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.657 03:26:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.657 03:26:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:04.657 03:26:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:04.657 I/O targets: 00:18:04.657 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:04.657 00:18:04.657 00:18:04.657 CUnit - A unit testing framework for C - Version 2.1-3 00:18:04.657 http://cunit.sourceforge.net/ 00:18:04.657 00:18:04.657 00:18:04.657 Suite: bdevio tests on: raid5f 00:18:04.657 Test: blockdev write read block ...passed 00:18:04.657 Test: blockdev write zeroes read block ...passed 00:18:04.657 Test: blockdev write zeroes read no split ...passed 00:18:04.657 Test: blockdev write zeroes read split ...passed 00:18:04.657 Test: blockdev write zeroes read split partial ...passed 00:18:04.657 Test: blockdev reset ...passed 00:18:04.657 Test: blockdev write read 8 blocks ...passed 00:18:04.657 Test: blockdev write read size > 128k ...passed 00:18:04.657 Test: blockdev write read invalid size ...passed 00:18:04.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:04.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:04.657 Test: blockdev write read max offset ...passed 00:18:04.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:04.657 Test: blockdev writev readv 8 blocks ...passed 00:18:04.657 Test: 
blockdev writev readv 30 x 1block ...passed 00:18:04.657 Test: blockdev writev readv block ...passed 00:18:04.657 Test: blockdev writev readv size > 128k ...passed 00:18:04.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:04.657 Test: blockdev comparev and writev ...passed 00:18:04.657 Test: blockdev nvme passthru rw ...passed 00:18:04.657 Test: blockdev nvme passthru vendor specific ...passed 00:18:04.657 Test: blockdev nvme admin passthru ...passed 00:18:04.657 Test: blockdev copy ...passed 00:18:04.657 00:18:04.657 Run Summary: Type Total Ran Passed Failed Inactive 00:18:04.657 suites 1 1 n/a 0 0 00:18:04.657 tests 23 23 23 0 0 00:18:04.657 asserts 130 130 130 0 n/a 00:18:04.657 00:18:04.657 Elapsed time = 0.335 seconds 00:18:04.657 0 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 102359 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 102359 ']' 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 102359 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102359 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102359' 00:18:04.657 killing process with pid 102359 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 102359 00:18:04.657 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 102359 
00:18:04.918 03:26:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:04.918 00:18:04.918 real 0m1.432s 00:18:04.918 user 0m3.387s 00:18:04.918 sys 0m0.360s 00:18:04.918 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.918 03:26:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:04.918 ************************************ 00:18:04.918 END TEST bdev_bounds 00:18:04.918 ************************************ 00:18:04.918 03:26:52 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:04.918 03:26:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:04.918 03:26:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.918 03:26:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:05.178 ************************************ 00:18:05.178 START TEST bdev_nbd 00:18:05.178 ************************************ 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:05.178 03:26:52 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=102408 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 102408 /var/tmp/spdk-nbd.sock 00:18:05.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 102408 ']' 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.178 03:26:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:05.178 [2024-11-21 03:26:52.603500] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:05.178 [2024-11-21 03:26:52.603749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.439 [2024-11-21 03:26:52.747214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:05.439 [2024-11-21 03:26:52.786599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.439 [2024-11-21 03:26:52.813095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:06.009 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.278 1+0 records in 00:18:06.278 1+0 records out 00:18:06.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334611 s, 12.2 MB/s 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:06.278 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:06.278 03:26:53 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:06.559 { 00:18:06.559 "nbd_device": "/dev/nbd0", 00:18:06.559 "bdev_name": "raid5f" 00:18:06.559 } 00:18:06.559 ]' 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:06.559 { 00:18:06.559 "nbd_device": "/dev/nbd0", 00:18:06.559 "bdev_name": "raid5f" 00:18:06.559 } 00:18:06.559 ]' 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.559 03:26:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:06.834 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:07.094 03:26:54 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.094 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:07.094 /dev/nbd0 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:07.354 03:26:54 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:07.354 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.355 1+0 records in 00:18:07.355 1+0 records out 00:18:07.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562931 s, 7.3 MB/s 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:07.355 { 00:18:07.355 "nbd_device": "/dev/nbd0", 00:18:07.355 "bdev_name": "raid5f" 00:18:07.355 } 00:18:07.355 ]' 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:07.355 { 00:18:07.355 "nbd_device": "/dev/nbd0", 00:18:07.355 "bdev_name": "raid5f" 00:18:07.355 } 00:18:07.355 ]' 00:18:07.355 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:07.614 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:07.614 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:07.614 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:07.614 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:07.614 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
bs=4096 count=256 00:18:07.615 256+0 records in 00:18:07.615 256+0 records out 00:18:07.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143643 s, 73.0 MB/s 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:07.615 03:26:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:07.615 256+0 records in 00:18:07.615 256+0 records out 00:18:07.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278545 s, 37.6 MB/s 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.615 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.875 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:08.136 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:08.136 malloc_lvol_verify 00:18:08.396 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:08.396 b6fefc96-2456-4765-b1c4-8408aa826398 00:18:08.396 03:26:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:08.655 989d76c1-732a-413a-956f-d26285cc6d12 00:18:08.655 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:08.915 /dev/nbd0 
00:18:08.915 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:08.915 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:08.915 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:08.915 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:08.915 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:08.915 mke2fs 1.47.0 (5-Feb-2023) 00:18:08.915 Discarding device blocks: 0/4096 done 00:18:08.915 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:08.915 00:18:08.915 Allocating group tables: 0/1 done 00:18:08.915 Writing inode tables: 0/1 done 00:18:08.915 Creating journal (1024 blocks): done 00:18:08.915 Writing superblocks and filesystem accounting information: 0/1 done 00:18:08.916 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.916 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 102408 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 102408 ']' 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 102408 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:09.175 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.176 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102408 00:18:09.176 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.176 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.176 killing process with pid 102408 00:18:09.176 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102408' 00:18:09.176 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 102408 00:18:09.176 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 102408 00:18:09.437 03:26:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:09.437 00:18:09.437 real 0m4.306s 00:18:09.437 user 0m6.217s 00:18:09.437 sys 0m1.295s 00:18:09.437 03:26:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.437 03:26:56 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.437 ************************************ 00:18:09.437 END TEST bdev_nbd 00:18:09.437 ************************************ 00:18:09.437 03:26:56 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:09.437 03:26:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:09.437 03:26:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:09.437 03:26:56 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:09.437 03:26:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:09.437 03:26:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.437 03:26:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:09.437 ************************************ 00:18:09.437 START TEST bdev_fio 00:18:09.437 ************************************ 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:09.437 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:09.437 03:26:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=raid5f 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:09.697 ************************************ 00:18:09.697 START TEST bdev_fio_rw_verify 00:18:09.697 ************************************ 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:09.697 03:26:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:09.957 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:09.957 fio-3.35 00:18:09.957 Starting 1 thread 00:18:22.177 00:18:22.177 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102599: Thu Nov 21 03:27:07 2024 00:18:22.177 read: IOPS=12.2k, BW=47.6MiB/s (50.0MB/s)(476MiB/10000msec) 00:18:22.177 slat (usec): min=17, max=300, avg=19.36, stdev= 3.18 00:18:22.177 clat (usec): min=12, max=1200, avg=131.51, stdev=49.16 00:18:22.177 lat (usec): min=31, max=1219, avg=150.87, stdev=50.19 00:18:22.177 clat percentiles (usec): 00:18:22.177 | 50.000th=[ 135], 99.000th=[ 219], 99.900th=[ 392], 99.990th=[ 881], 00:18:22.177 | 99.999th=[ 1188] 00:18:22.177 write: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(493MiB/9874msec); 0 zone resets 00:18:22.177 slat (usec): min=7, max=257, avg=17.04, stdev= 4.09 00:18:22.177 clat (usec): min=58, max=1700, avg=300.60, stdev=42.63 00:18:22.177 lat (usec): min=74, max=1958, avg=317.64, stdev=43.72 00:18:22.177 clat percentiles (usec): 00:18:22.178 | 50.000th=[ 306], 99.000th=[ 383], 99.900th=[ 594], 99.990th=[ 1012], 00:18:22.178 | 99.999th=[ 1598] 00:18:22.178 bw ( KiB/s): min=47920, max=53776, per=98.72%, avg=50461.89, stdev=1485.94, samples=19 00:18:22.178 iops : min=11980, max=13444, avg=12615.47, stdev=371.48, samples=19 00:18:22.178 lat (usec) : 20=0.01%, 50=0.01%, 100=16.31%, 250=39.14%, 500=44.44% 00:18:22.178 lat (usec) : 750=0.06%, 1000=0.03% 00:18:22.178 lat (msec) : 2=0.01% 00:18:22.178 cpu : usr=98.72%, sys=0.49%, ctx=33, majf=0, minf=13088 00:18:22.178 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.178 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.178 issued rwts: total=121976,126184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.178 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:22.178 00:18:22.178 Run status group 0 (all jobs): 00:18:22.178 READ: bw=47.6MiB/s (50.0MB/s), 47.6MiB/s-47.6MiB/s (50.0MB/s-50.0MB/s), io=476MiB (500MB), run=10000-10000msec 00:18:22.178 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=493MiB (517MB), run=9874-9874msec 00:18:22.178 ----------------------------------------------------- 00:18:22.178 Suppressions used: 00:18:22.178 count bytes template 00:18:22.178 1 7 /usr/src/fio/parse.c 00:18:22.178 421 40416 /usr/src/fio/iolog.c 00:18:22.178 1 8 libtcmalloc_minimal.so 00:18:22.178 1 904 libcrypto.so 00:18:22.178 ----------------------------------------------------- 00:18:22.178 00:18:22.178 00:18:22.178 real 0m11.245s 00:18:22.178 user 0m11.449s 00:18:22.178 sys 0m0.522s 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:22.178 ************************************ 00:18:22.178 END TEST bdev_fio_rw_verify 00:18:22.178 ************************************ 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1286 -- # local bdev_type= 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aa1b99cb-1421-4f92-a61b-3c724db55e84"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aa1b99cb-1421-4f92-a61b-3c724db55e84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa1b99cb-1421-4f92-a61b-3c724db55e84",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "65a2367c-373d-47b4-a2f5-b865e83411df",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fa559841-0c32-476a-8643-50fd7d2faab4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "659a17c0-3510-44c5-be63-26128a5c5a30",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:22.178 /home/vagrant/spdk_repo/spdk 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:22.178 00:18:22.178 real 0m11.542s 00:18:22.178 user 0m11.568s 00:18:22.178 sys 0m0.676s 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.178 03:27:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:22.178 ************************************ 00:18:22.178 END TEST bdev_fio 00:18:22.178 
************************************ 00:18:22.178 03:27:08 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:22.178 03:27:08 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:22.178 03:27:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:22.178 03:27:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.178 03:27:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:22.178 ************************************ 00:18:22.178 START TEST bdev_verify 00:18:22.178 ************************************ 00:18:22.178 03:27:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:22.178 [2024-11-21 03:27:08.573956] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:22.178 [2024-11-21 03:27:08.574085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102750 ] 00:18:22.178 [2024-11-21 03:27:08.713959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:22.178 [2024-11-21 03:27:08.753493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:22.178 [2024-11-21 03:27:08.786373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.178 [2024-11-21 03:27:08.786476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.178 Running I/O for 5 seconds... 00:18:23.818 10957.00 IOPS, 42.80 MiB/s [2024-11-21T03:27:12.322Z] 11068.00 IOPS, 43.23 MiB/s [2024-11-21T03:27:13.261Z] 11107.67 IOPS, 43.39 MiB/s [2024-11-21T03:27:14.200Z] 11127.50 IOPS, 43.47 MiB/s [2024-11-21T03:27:14.200Z] 11139.40 IOPS, 43.51 MiB/s 00:18:26.634 Latency(us) 00:18:26.634 [2024-11-21T03:27:14.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.634 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:26.634 Verification LBA range: start 0x0 length 0x2000 00:18:26.634 raid5f : 5.02 4366.53 17.06 0.00 0.00 44103.06 260.62 31302.82 00:18:26.634 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:26.634 Verification LBA range: start 0x2000 length 0x2000 00:18:26.634 raid5f : 5.02 6777.20 26.47 0.00 0.00 28405.72 264.19 21363.60 00:18:26.634 [2024-11-21T03:27:14.200Z] =================================================================================================================== 00:18:26.634 [2024-11-21T03:27:14.200Z] Total : 11143.72 43.53 0.00 0.00 34558.59 260.62 31302.82 00:18:26.894 00:18:26.894 real 0m5.761s 00:18:26.894 user 0m10.707s 00:18:26.894 sys 0m0.246s 00:18:26.894 03:27:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.894 03:27:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:26.894 ************************************ 00:18:26.894 END TEST bdev_verify 00:18:26.894 ************************************ 00:18:26.894 03:27:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:26.894 03:27:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:26.894 03:27:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.894 03:27:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:26.894 ************************************ 00:18:26.894 START TEST bdev_verify_big_io 00:18:26.894 ************************************ 00:18:26.894 03:27:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:26.894 [2024-11-21 03:27:14.410044] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:26.894 [2024-11-21 03:27:14.410153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102833 ] 00:18:27.154 [2024-11-21 03:27:14.550071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:27.154 [2024-11-21 03:27:14.588511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:27.154 [2024-11-21 03:27:14.616260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.154 [2024-11-21 03:27:14.616369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.414 Running I/O for 5 seconds... 
00:18:29.736 695.00 IOPS, 43.44 MiB/s [2024-11-21T03:27:18.251Z] 761.00 IOPS, 47.56 MiB/s [2024-11-21T03:27:19.186Z] 803.67 IOPS, 50.23 MiB/s [2024-11-21T03:27:20.132Z] 825.00 IOPS, 51.56 MiB/s [2024-11-21T03:27:20.132Z] 838.00 IOPS, 52.38 MiB/s 00:18:32.566 Latency(us) 00:18:32.566 [2024-11-21T03:27:20.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.566 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:32.566 Verification LBA range: start 0x0 length 0x200 00:18:32.566 raid5f : 5.26 361.57 22.60 0.00 0.00 8741443.37 226.70 382031.52 00:18:32.566 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:32.566 Verification LBA range: start 0x200 length 0x200 00:18:32.566 raid5f : 5.15 468.39 29.27 0.00 0.00 6841783.16 355.23 296120.13 00:18:32.566 [2024-11-21T03:27:20.132Z] =================================================================================================================== 00:18:32.566 [2024-11-21T03:27:20.132Z] Total : 829.95 51.87 0.00 0.00 7679376.62 226.70 382031.52 00:18:32.845 00:18:32.845 real 0m5.998s 00:18:32.845 user 0m11.188s 00:18:32.845 sys 0m0.241s 00:18:32.845 03:27:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.845 03:27:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.845 ************************************ 00:18:32.845 END TEST bdev_verify_big_io 00:18:32.845 ************************************ 00:18:32.845 03:27:20 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:32.845 03:27:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:32.845 03:27:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.845 03:27:20 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:32.845 ************************************ 00:18:32.845 START TEST bdev_write_zeroes 00:18:32.845 ************************************ 00:18:32.845 03:27:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:33.104 [2024-11-21 03:27:20.486364] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:33.104 [2024-11-21 03:27:20.486504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102915 ] 00:18:33.104 [2024-11-21 03:27:20.627855] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:33.104 [2024-11-21 03:27:20.665001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.364 [2024-11-21 03:27:20.694067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.364 Running I/O for 1 seconds... 
00:18:34.747 30471.00 IOPS, 119.03 MiB/s 00:18:34.747 Latency(us) 00:18:34.747 [2024-11-21T03:27:22.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.747 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:34.747 raid5f : 1.01 30455.22 118.97 0.00 0.00 4191.36 1320.94 5912.12 00:18:34.747 [2024-11-21T03:27:22.313Z] =================================================================================================================== 00:18:34.747 [2024-11-21T03:27:22.313Z] Total : 30455.22 118.97 0.00 0.00 4191.36 1320.94 5912.12 00:18:34.747 00:18:34.747 real 0m1.721s 00:18:34.747 user 0m1.369s 00:18:34.747 sys 0m0.241s 00:18:34.747 03:27:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.747 03:27:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:34.747 ************************************ 00:18:34.747 END TEST bdev_write_zeroes 00:18:34.747 ************************************ 00:18:34.747 03:27:22 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:34.747 03:27:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:34.747 03:27:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.747 03:27:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.747 ************************************ 00:18:34.747 START TEST bdev_json_nonenclosed 00:18:34.747 ************************************ 00:18:34.747 03:27:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:34.747 [2024-11-21 
03:27:22.277366] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:34.747 [2024-11-21 03:27:22.277482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102957 ] 00:18:35.007 [2024-11-21 03:27:22.416895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:35.007 [2024-11-21 03:27:22.456894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.007 [2024-11-21 03:27:22.484727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.007 [2024-11-21 03:27:22.484830] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:35.007 [2024-11-21 03:27:22.484849] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:35.007 [2024-11-21 03:27:22.484858] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:35.268 00:18:35.268 real 0m0.391s 00:18:35.268 user 0m0.158s 00:18:35.268 sys 0m0.129s 00:18:35.268 03:27:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.268 03:27:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:35.268 ************************************ 00:18:35.268 END TEST bdev_json_nonenclosed 00:18:35.268 ************************************ 00:18:35.268 03:27:22 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:35.268 03:27:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:35.268 
03:27:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.268 03:27:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.268 ************************************ 00:18:35.268 START TEST bdev_json_nonarray 00:18:35.268 ************************************ 00:18:35.268 03:27:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:35.268 [2024-11-21 03:27:22.746919] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.11.0-rc3 initialization... 00:18:35.268 [2024-11-21 03:27:22.747061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102977 ] 00:18:35.529 [2024-11-21 03:27:22.888793] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:35.529 [2024-11-21 03:27:22.925725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.529 [2024-11-21 03:27:22.953328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.529 [2024-11-21 03:27:22.953436] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:35.529 [2024-11-21 03:27:22.953453] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:35.529 [2024-11-21 03:27:22.953463] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:35.529 00:18:35.529 real 0m0.393s 00:18:35.529 user 0m0.158s 00:18:35.529 sys 0m0.130s 00:18:35.529 03:27:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.529 03:27:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:35.529 ************************************ 00:18:35.529 END TEST bdev_json_nonarray 00:18:35.529 ************************************ 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:35.790 03:27:23 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:35.790 00:18:35.790 real 0m34.739s 00:18:35.790 user 0m47.047s 00:18:35.790 sys 0m4.608s 00:18:35.790 03:27:23 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.790 03:27:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.790 
************************************ 00:18:35.790 END TEST blockdev_raid5f 00:18:35.790 ************************************ 00:18:35.790 03:27:23 -- spdk/autotest.sh@194 -- # uname -s 00:18:35.790 03:27:23 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:35.790 03:27:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.790 03:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:35.790 03:27:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:18:35.790 03:27:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:18:35.790 03:27:23 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:18:35.790 03:27:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:18:35.790 03:27:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.790 03:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:35.790 03:27:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:18:35.790 03:27:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:18:35.790 03:27:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:18:35.790 03:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:38.333 INFO: APP EXITING 00:18:38.333 INFO: killing all VMs 00:18:38.333 INFO: killing vhost app 00:18:38.333 INFO: EXIT DONE 00:18:38.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:38.593 Waiting for block devices as requested 00:18:38.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:38.854 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:39.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:39.796 Cleaning 00:18:39.796 Removing: /var/run/dpdk/spdk0/config 00:18:39.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:39.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:39.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:39.796 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:39.796 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:39.796 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:39.796 Removing: /dev/shm/spdk_tgt_trace.pid71033 00:18:40.056 Removing: /var/run/dpdk/spdk0 00:18:40.056 Removing: /var/run/dpdk/spdk_pid100147 00:18:40.056 Removing: /var/run/dpdk/spdk_pid101056 00:18:40.056 Removing: /var/run/dpdk/spdk_pid101371 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102036 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102294 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102334 00:18:40.056 Removing: 
/var/run/dpdk/spdk_pid102359 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102588 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102750 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102833 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102915 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102957 00:18:40.056 Removing: /var/run/dpdk/spdk_pid102977 00:18:40.056 Removing: /var/run/dpdk/spdk_pid70858 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71033 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71240 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71333 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71361 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71473 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71491 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71679 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71764 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71857 00:18:40.056 Removing: /var/run/dpdk/spdk_pid71957 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72043 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72078 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72119 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72185 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72304 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72737 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72790 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72842 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72858 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72927 00:18:40.056 Removing: /var/run/dpdk/spdk_pid72943 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73012 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73028 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73070 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73088 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73130 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73148 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73297 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73328 00:18:40.056 Removing: /var/run/dpdk/spdk_pid73416 00:18:40.056 Removing: /var/run/dpdk/spdk_pid74583 00:18:40.056 Removing: 
/var/run/dpdk/spdk_pid74789 00:18:40.056 Removing: /var/run/dpdk/spdk_pid74918 00:18:40.056 Removing: /var/run/dpdk/spdk_pid75523 00:18:40.056 Removing: /var/run/dpdk/spdk_pid75723 00:18:40.056 Removing: /var/run/dpdk/spdk_pid75852 00:18:40.056 Removing: /var/run/dpdk/spdk_pid76457 00:18:40.056 Removing: /var/run/dpdk/spdk_pid76776 00:18:40.056 Removing: /var/run/dpdk/spdk_pid76905 00:18:40.056 Removing: /var/run/dpdk/spdk_pid78257 00:18:40.056 Removing: /var/run/dpdk/spdk_pid78499 00:18:40.317 Removing: /var/run/dpdk/spdk_pid78628 00:18:40.317 Removing: /var/run/dpdk/spdk_pid79980 00:18:40.317 Removing: /var/run/dpdk/spdk_pid80222 00:18:40.317 Removing: /var/run/dpdk/spdk_pid80357 00:18:40.317 Removing: /var/run/dpdk/spdk_pid81702 00:18:40.317 Removing: /var/run/dpdk/spdk_pid82138 00:18:40.317 Removing: /var/run/dpdk/spdk_pid82267 00:18:40.317 Removing: /var/run/dpdk/spdk_pid83697 00:18:40.317 Removing: /var/run/dpdk/spdk_pid83951 00:18:40.317 Removing: /var/run/dpdk/spdk_pid84085 00:18:40.317 Removing: /var/run/dpdk/spdk_pid85521 00:18:40.317 Removing: /var/run/dpdk/spdk_pid85769 00:18:40.317 Removing: /var/run/dpdk/spdk_pid85900 00:18:40.317 Removing: /var/run/dpdk/spdk_pid87343 00:18:40.317 Removing: /var/run/dpdk/spdk_pid87820 00:18:40.317 Removing: /var/run/dpdk/spdk_pid87949 00:18:40.317 Removing: /var/run/dpdk/spdk_pid88078 00:18:40.317 Removing: /var/run/dpdk/spdk_pid88485 00:18:40.317 Removing: /var/run/dpdk/spdk_pid89198 00:18:40.317 Removing: /var/run/dpdk/spdk_pid89563 00:18:40.317 Removing: /var/run/dpdk/spdk_pid90235 00:18:40.317 Removing: /var/run/dpdk/spdk_pid90659 00:18:40.317 Removing: /var/run/dpdk/spdk_pid91401 00:18:40.317 Removing: /var/run/dpdk/spdk_pid91794 00:18:40.317 Removing: /var/run/dpdk/spdk_pid93704 00:18:40.317 Removing: /var/run/dpdk/spdk_pid94137 00:18:40.317 Removing: /var/run/dpdk/spdk_pid94560 00:18:40.317 Removing: /var/run/dpdk/spdk_pid96599 00:18:40.317 Removing: /var/run/dpdk/spdk_pid97073 00:18:40.317 Removing: 
/var/run/dpdk/spdk_pid97555 00:18:40.317 Removing: /var/run/dpdk/spdk_pid98592 00:18:40.317 Removing: /var/run/dpdk/spdk_pid98904 00:18:40.317 Removing: /var/run/dpdk/spdk_pid99828 00:18:40.317 Clean 00:18:40.317 03:27:27 -- common/autotest_common.sh@1453 -- # return 0 00:18:40.317 03:27:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:18:40.317 03:27:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.317 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:18:40.582 03:27:27 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:18:40.582 03:27:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.582 03:27:27 -- common/autotest_common.sh@10 -- # set +x 00:18:40.582 03:27:27 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:40.582 03:27:27 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:40.582 03:27:27 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:40.582 03:27:27 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:18:40.582 03:27:27 -- spdk/autotest.sh@398 -- # hostname 00:18:40.582 03:27:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:40.842 geninfo: WARNING: invalid characters removed from testname! 
00:19:02.796 03:27:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:05.329 03:27:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:06.705 03:27:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:08.609 03:27:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:11.144 03:27:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:12.546 03:28:00 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:14.452 03:28:01 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:14.452 03:28:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:14.452 03:28:02 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:14.452 03:28:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:14.452 03:28:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:14.453 03:28:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:14.713 + [[ -n 6166 ]] 00:19:14.713 + sudo kill 6166 00:19:14.724 [Pipeline] } 00:19:14.740 [Pipeline] // timeout 00:19:14.746 [Pipeline] } 00:19:14.760 [Pipeline] // stage 00:19:14.765 [Pipeline] } 00:19:14.780 [Pipeline] // catchError 00:19:14.792 [Pipeline] stage 00:19:14.794 [Pipeline] { (Stop VM) 00:19:14.807 [Pipeline] sh 00:19:15.093 + vagrant halt 00:19:17.635 ==> default: Halting domain... 00:19:25.784 [Pipeline] sh 00:19:26.067 + vagrant destroy -f 00:19:28.608 ==> default: Removing domain... 
00:19:28.622 [Pipeline] sh 00:19:28.908 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:19:28.918 [Pipeline] } 00:19:28.932 [Pipeline] // stage 00:19:28.937 [Pipeline] } 00:19:28.952 [Pipeline] // dir 00:19:28.957 [Pipeline] } 00:19:28.971 [Pipeline] // wrap 00:19:28.977 [Pipeline] } 00:19:28.990 [Pipeline] // catchError 00:19:28.999 [Pipeline] stage 00:19:29.001 [Pipeline] { (Epilogue) 00:19:29.015 [Pipeline] sh 00:19:29.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:33.540 [Pipeline] catchError 00:19:33.542 [Pipeline] { 00:19:33.556 [Pipeline] sh 00:19:33.842 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:33.842 Artifacts sizes are good 00:19:33.852 [Pipeline] } 00:19:33.867 [Pipeline] // catchError 00:19:33.878 [Pipeline] archiveArtifacts 00:19:33.885 Archiving artifacts 00:19:33.985 [Pipeline] cleanWs 00:19:33.997 [WS-CLEANUP] Deleting project workspace... 00:19:33.997 [WS-CLEANUP] Deferred wipeout is used... 00:19:34.004 [WS-CLEANUP] done 00:19:34.006 [Pipeline] } 00:19:34.022 [Pipeline] // stage 00:19:34.028 [Pipeline] } 00:19:34.042 [Pipeline] // node 00:19:34.047 [Pipeline] End of Pipeline 00:19:34.091 Finished: SUCCESS